00:00:00.000 Started by upstream project "autotest-per-patch" build number 132760 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.104 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.105 The recommended git tool is: git 00:00:00.105 using credential 00000000-0000-0000-0000-000000000002 00:00:00.107 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.159 Fetching changes from the remote Git repository 00:00:00.163 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.216 Using shallow fetch with depth 1 00:00:00.216 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.216 > git --version # timeout=10 00:00:00.254 > git --version # 'git version 2.39.2' 00:00:00.254 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.275 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.275 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.696 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.709 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.721 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.721 > git config core.sparsecheckout # timeout=10 00:00:05.732 > git read-tree -mu HEAD # timeout=10 00:00:05.746 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.773 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.773 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.874 [Pipeline] Start of Pipeline 00:00:05.890 [Pipeline] library 00:00:05.892 Loading library shm_lib@master 00:00:05.892 Library shm_lib@master is cached. Copying from home. 00:00:05.911 [Pipeline] node 00:00:05.931 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest 00:00:05.933 [Pipeline] { 00:00:05.944 [Pipeline] catchError 00:00:05.946 [Pipeline] { 00:00:05.957 [Pipeline] wrap 00:00:05.964 [Pipeline] { 00:00:05.976 [Pipeline] stage 00:00:05.978 [Pipeline] { (Prologue) 00:00:06.006 [Pipeline] echo 00:00:06.008 Node: VM-host-WFP1 00:00:06.019 [Pipeline] cleanWs 00:00:06.031 [WS-CLEANUP] Deleting project workspace... 00:00:06.031 [WS-CLEANUP] Deferred wipeout is used... 
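The checkout above amounts to a depth-1 shallow fetch of the jbp repository followed by a detached checkout of the fetched revision. A minimal sketch of the equivalent standalone commands, with the URL and commit taken from the log (credentials, proxy settings, and the Jenkins-specific timeout flags omitted):

    # Reproduce the jbp checkout by hand (sketch; assumes anonymous read access)
    git init jbp && cd jbp
    git fetch --tags --force --depth=1 \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f db4637e8b949f278f369ec13f70585206ccd9507   # FETCH_HEAD in the log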
00:00:06.036 [WS-CLEANUP] done 00:00:06.270 [Pipeline] setCustomBuildProperty 00:00:06.392 [Pipeline] httpRequest 00:00:07.094 [Pipeline] echo 00:00:07.095 Sorcerer 10.211.164.20 is alive 00:00:07.103 [Pipeline] retry 00:00:07.105 [Pipeline] { 00:00:07.117 [Pipeline] httpRequest 00:00:07.120 HttpMethod: GET 00:00:07.121 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.121 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.143 Response Code: HTTP/1.1 200 OK 00:00:07.143 Success: Status code 200 is in the accepted range: 200,404 00:00:07.144 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.428 [Pipeline] } 00:00:13.444 [Pipeline] // retry 00:00:13.451 [Pipeline] sh 00:00:13.738 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:13.757 [Pipeline] httpRequest 00:00:14.313 [Pipeline] echo 00:00:14.315 Sorcerer 10.211.164.20 is alive 00:00:14.325 [Pipeline] retry 00:00:14.327 [Pipeline] { 00:00:14.341 [Pipeline] httpRequest 00:00:14.346 HttpMethod: GET 00:00:14.347 URL: http://10.211.164.20/packages/spdk_42416bc2ced783e3d51234bfd1b556746b79e238.tar.gz 00:00:14.348 Sending request to url: http://10.211.164.20/packages/spdk_42416bc2ced783e3d51234bfd1b556746b79e238.tar.gz 00:00:14.369 Response Code: HTTP/1.1 200 OK 00:00:14.370 Success: Status code 200 is in the accepted range: 200,404 00:00:14.371 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_42416bc2ced783e3d51234bfd1b556746b79e238.tar.gz 00:01:45.709 [Pipeline] } 00:01:45.728 [Pipeline] // retry 00:01:45.736 [Pipeline] sh 00:01:46.019 + tar --no-same-owner -xf spdk_42416bc2ced783e3d51234bfd1b556746b79e238.tar.gz 00:01:48.570 [Pipeline] sh 00:01:48.853 + git -C spdk log --oneline -n5 00:01:48.853 42416bc2c lib/reduce: Support storing metadata on backing dev. (5 of 5, test cases) 00:01:48.853 20bebc997 lib/reduce: Support storing metadata on backing dev. (4 of 5, data unmap with async metadata) 00:01:48.853 3fb854a13 lib/reduce: Support storing metadata on backing dev. (3 of 5, reload process) 00:01:48.853 f501a7223 lib/reduce: Support storing metadata on backing dev. (2 of 5, data r/w with async metadata) 00:01:48.853 8ffb12d0f lib/reduce: Support storing metadata on backing dev. 
(1 of 5, struct define and init process) 00:01:48.873 [Pipeline] writeFile 00:01:48.888 [Pipeline] sh 00:01:49.174 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:49.187 [Pipeline] sh 00:01:49.473 + cat autorun-spdk.conf 00:01:49.473 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:49.473 SPDK_TEST_NVME=1 00:01:49.473 SPDK_TEST_FTL=1 00:01:49.473 SPDK_TEST_ISAL=1 00:01:49.473 SPDK_RUN_ASAN=1 00:01:49.473 SPDK_RUN_UBSAN=1 00:01:49.473 SPDK_TEST_XNVME=1 00:01:49.473 SPDK_TEST_NVME_FDP=1 00:01:49.473 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:49.480 RUN_NIGHTLY=0 00:01:49.482 [Pipeline] } 00:01:49.495 [Pipeline] // stage 00:01:49.510 [Pipeline] stage 00:01:49.512 [Pipeline] { (Run VM) 00:01:49.524 [Pipeline] sh 00:01:49.809 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:49.809 + echo 'Start stage prepare_nvme.sh' 00:01:49.809 Start stage prepare_nvme.sh 00:01:49.809 + [[ -n 2 ]] 00:01:49.809 + disk_prefix=ex2 00:01:49.809 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:49.809 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:49.809 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:49.809 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:49.809 ++ SPDK_TEST_NVME=1 00:01:49.809 ++ SPDK_TEST_FTL=1 00:01:49.809 ++ SPDK_TEST_ISAL=1 00:01:49.809 ++ SPDK_RUN_ASAN=1 00:01:49.809 ++ SPDK_RUN_UBSAN=1 00:01:49.809 ++ SPDK_TEST_XNVME=1 00:01:49.809 ++ SPDK_TEST_NVME_FDP=1 00:01:49.809 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:49.809 ++ RUN_NIGHTLY=0 00:01:49.809 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:49.809 + nvme_files=() 00:01:49.809 + declare -A nvme_files 00:01:49.809 + backend_dir=/var/lib/libvirt/images/backends 00:01:49.809 + nvme_files['nvme.img']=5G 00:01:49.809 + nvme_files['nvme-cmb.img']=5G 00:01:49.809 + nvme_files['nvme-multi0.img']=4G 00:01:49.809 + nvme_files['nvme-multi1.img']=4G 00:01:49.809 + nvme_files['nvme-multi2.img']=4G 00:01:49.809 + nvme_files['nvme-openstack.img']=8G 00:01:49.809 + nvme_files['nvme-zns.img']=5G 00:01:49.809 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:49.809 + (( SPDK_TEST_FTL == 1 )) 00:01:49.809 + nvme_files["nvme-ftl.img"]=6G 00:01:49.809 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:49.809 + nvme_files["nvme-fdp.img"]=1G 00:01:49.809 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:49.809 + for nvme in "${!nvme_files[@]}" 00:01:49.809 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G 00:01:49.809 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:49.809 + for nvme in "${!nvme_files[@]}" 00:01:49.809 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-ftl.img -s 6G 00:01:49.809 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:49.809 + for nvme in "${!nvme_files[@]}" 00:01:49.809 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G 00:01:49.809 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:49.809 + for nvme in "${!nvme_files[@]}" 00:01:49.809 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G 00:01:50.070 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:50.070 + for nvme in "${!nvme_files[@]}" 00:01:50.070 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G 00:01:50.070 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:50.070 + for nvme in "${!nvme_files[@]}" 00:01:50.070 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G 00:01:50.070 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:50.070 + for nvme in "${!nvme_files[@]}" 00:01:50.070 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G 00:01:50.070 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:50.070 + for nvme in "${!nvme_files[@]}" 00:01:50.070 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-fdp.img -s 1G 00:01:50.070 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:50.070 + for nvme in "${!nvme_files[@]}" 00:01:50.070 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G 00:01:50.330 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:50.330 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu 00:01:50.330 + echo 'End stage prepare_nvme.sh' 00:01:50.330 End stage prepare_nvme.sh 00:01:50.342 [Pipeline] sh 00:01:50.628 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:50.628 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex2-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:50.628 00:01:50.628 
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:50.628 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:50.628 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:50.628 HELP=0 00:01:50.628 DRY_RUN=0 00:01:50.628 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,/var/lib/libvirt/images/backends/ex2-nvme-fdp.img, 00:01:50.628 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:50.628 NVME_AUTO_CREATE=0 00:01:50.628 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,, 00:01:50.628 NVME_CMB=,,,, 00:01:50.628 NVME_PMR=,,,, 00:01:50.628 NVME_ZNS=,,,, 00:01:50.628 NVME_MS=true,,,, 00:01:50.628 NVME_FDP=,,,on, 00:01:50.628 SPDK_VAGRANT_DISTRO=fedora39 00:01:50.628 SPDK_VAGRANT_VMCPU=10 00:01:50.628 SPDK_VAGRANT_VMRAM=12288 00:01:50.628 SPDK_VAGRANT_PROVIDER=libvirt 00:01:50.628 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:50.628 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:50.628 SPDK_OPENSTACK_NETWORK=0 00:01:50.628 VAGRANT_PACKAGE_BOX=0 00:01:50.628 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:50.628 FORCE_DISTRO=true 00:01:50.628 VAGRANT_BOX_VERSION= 00:01:50.628 EXTRA_VAGRANTFILES= 00:01:50.628 NIC_MODEL=e1000 00:01:50.628 00:01:50.628 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:50.628 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:53.210 Bringing machine 'default' up with 'libvirt' provider... 00:01:54.150 ==> default: Creating image (snapshot of base box volume). 00:01:54.410 ==> default: Creating domain with the following settings... 
00:01:54.410 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1733543196_39846998b0eaca0b1e88 00:01:54.410 ==> default: -- Domain type: kvm 00:01:54.410 ==> default: -- Cpus: 10 00:01:54.410 ==> default: -- Feature: acpi 00:01:54.410 ==> default: -- Feature: apic 00:01:54.410 ==> default: -- Feature: pae 00:01:54.410 ==> default: -- Memory: 12288M 00:01:54.410 ==> default: -- Memory Backing: hugepages: 00:01:54.410 ==> default: -- Management MAC: 00:01:54.410 ==> default: -- Loader: 00:01:54.410 ==> default: -- Nvram: 00:01:54.410 ==> default: -- Base box: spdk/fedora39 00:01:54.410 ==> default: -- Storage pool: default 00:01:54.410 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733543196_39846998b0eaca0b1e88.img (20G) 00:01:54.410 ==> default: -- Volume Cache: default 00:01:54.410 ==> default: -- Kernel: 00:01:54.410 ==> default: -- Initrd: 00:01:54.410 ==> default: -- Graphics Type: vnc 00:01:54.410 ==> default: -- Graphics Port: -1 00:01:54.410 ==> default: -- Graphics IP: 127.0.0.1 00:01:54.410 ==> default: -- Graphics Password: Not defined 00:01:54.410 ==> default: -- Video Type: cirrus 00:01:54.410 ==> default: -- Video VRAM: 9216 00:01:54.410 ==> default: -- Sound Type: 00:01:54.410 ==> default: -- Keymap: en-us 00:01:54.410 ==> default: -- TPM Path: 00:01:54.410 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:54.410 ==> default: -- Command line args: 00:01:54.410 ==> default: -> value=-device, 00:01:54.410 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:54.410 ==> default: -> value=-drive, 00:01:54.410 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:54.410 ==> default: -> value=-device, 00:01:54.410 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:54.410 ==> default: -> value=-device, 00:01:54.410 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:54.410 ==> default: -> value=-drive, 00:01:54.410 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-1-drive0, 00:01:54.410 ==> default: -> value=-device, 00:01:54.410 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:54.410 ==> default: -> value=-device, 00:01:54.410 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:54.410 ==> default: -> value=-drive, 00:01:54.410 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:54.410 ==> default: -> value=-device, 00:01:54.410 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:54.410 ==> default: -> value=-drive, 00:01:54.410 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:54.410 ==> default: -> value=-device, 00:01:54.410 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:54.410 ==> default: -> value=-drive, 00:01:54.410 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:54.410 ==> default: -> value=-device, 00:01:54.410 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:54.410 ==> default: -> value=-device, 00:01:54.410 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:54.410 ==> default: -> value=-device, 00:01:54.410 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:54.410 ==> default: -> value=-drive, 00:01:54.410 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:54.410 ==> default: -> value=-device, 00:01:54.410 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:54.669 ==> default: Creating shared folders metadata... 00:01:54.669 ==> default: Starting domain. 00:01:56.577 ==> default: Waiting for domain to get an IP address... 00:02:14.688 ==> default: Waiting for SSH to become available... 00:02:14.688 ==> default: Configuring and enabling network interfaces... 00:02:18.885 default: SSH address: 192.168.121.240:22 00:02:18.885 default: SSH username: vagrant 00:02:18.885 default: SSH auth method: private key 00:02:21.423 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:31.410 ==> default: Mounting SSHFS shared folder... 00:02:32.345 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:32.345 ==> default: Checking Mount.. 00:02:34.252 ==> default: Folder Successfully Mounted! 00:02:34.252 ==> default: Running provisioner: file... 00:02:35.192 default: ~/.gitconfig => .gitconfig 00:02:35.761 00:02:35.761 SUCCESS! 00:02:35.761 00:02:35.761 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:35.761 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:35.761 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:35.761 00:02:35.771 [Pipeline] } 00:02:35.786 [Pipeline] // stage 00:02:35.795 [Pipeline] dir 00:02:35.796 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:35.798 [Pipeline] { 00:02:35.811 [Pipeline] catchError 00:02:35.813 [Pipeline] { 00:02:35.826 [Pipeline] sh 00:02:36.112 + vagrant ssh-config --host vagrant 00:02:36.112 + sed -ne /^Host/,$p 00:02:36.112 + tee ssh_conf 00:02:39.406 Host vagrant 00:02:39.406 HostName 192.168.121.240 00:02:39.406 User vagrant 00:02:39.406 Port 22 00:02:39.406 UserKnownHostsFile /dev/null 00:02:39.406 StrictHostKeyChecking no 00:02:39.406 PasswordAuthentication no 00:02:39.406 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:39.406 IdentitiesOnly yes 00:02:39.406 LogLevel FATAL 00:02:39.406 ForwardAgent yes 00:02:39.406 ForwardX11 yes 00:02:39.406 00:02:39.421 [Pipeline] withEnv 00:02:39.423 [Pipeline] { 00:02:39.437 [Pipeline] sh 00:02:39.720 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:39.720 source /etc/os-release 00:02:39.720 [[ -e /image.version ]] && img=$(< /image.version) 00:02:39.720 # Minimal, systemd-like check. 
00:02:39.720 if [[ -e /.dockerenv ]]; then 00:02:39.720 # Clear garbage from the node's name: 00:02:39.720 # agt-er_autotest_547-896 -> autotest_547-896 00:02:39.720 # $HOSTNAME is the actual container id 00:02:39.720 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:39.720 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:39.720 # We can assume this is a mount from a host where container is running, 00:02:39.720 # so fetch its hostname to easily identify the target swarm worker. 00:02:39.720 container="$(< /etc/hostname) ($agent)" 00:02:39.720 else 00:02:39.720 # Fallback 00:02:39.720 container=$agent 00:02:39.720 fi 00:02:39.720 fi 00:02:39.720 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:39.720 00:02:39.993 [Pipeline] } 00:02:40.009 [Pipeline] // withEnv 00:02:40.018 [Pipeline] setCustomBuildProperty 00:02:40.035 [Pipeline] stage 00:02:40.038 [Pipeline] { (Tests) 00:02:40.055 [Pipeline] sh 00:02:40.340 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:40.615 [Pipeline] sh 00:02:40.898 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:41.173 [Pipeline] timeout 00:02:41.173 Timeout set to expire in 50 min 00:02:41.175 [Pipeline] { 00:02:41.188 [Pipeline] sh 00:02:41.492 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:42.060 HEAD is now at 42416bc2c lib/reduce: Support storing metadata on backing dev. (5 of 5, test cases) 00:02:42.072 [Pipeline] sh 00:02:42.355 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:42.628 [Pipeline] sh 00:02:42.907 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:43.184 [Pipeline] sh 00:02:43.468 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:02:43.728 ++ readlink -f spdk_repo 00:02:43.728 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:43.728 + [[ -n /home/vagrant/spdk_repo ]] 00:02:43.728 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:43.728 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:43.728 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:43.728 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:43.728 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:43.728 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:43.728 + cd /home/vagrant/spdk_repo 00:02:43.728 + source /etc/os-release 00:02:43.728 ++ NAME='Fedora Linux' 00:02:43.728 ++ VERSION='39 (Cloud Edition)' 00:02:43.728 ++ ID=fedora 00:02:43.728 ++ VERSION_ID=39 00:02:43.728 ++ VERSION_CODENAME= 00:02:43.728 ++ PLATFORM_ID=platform:f39 00:02:43.728 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:43.728 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:43.728 ++ LOGO=fedora-logo-icon 00:02:43.728 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:43.728 ++ HOME_URL=https://fedoraproject.org/ 00:02:43.728 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:43.728 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:43.728 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:43.728 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:43.728 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:43.728 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:43.728 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:43.728 ++ SUPPORT_END=2024-11-12 00:02:43.728 ++ VARIANT='Cloud Edition' 00:02:43.728 ++ VARIANT_ID=cloud 00:02:43.728 + uname -a 00:02:43.728 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:43.728 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:43.988 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:44.557 Hugepages 00:02:44.557 node hugesize free / total 00:02:44.557 node0 1048576kB 0 / 0 00:02:44.557 node0 2048kB 0 / 0 00:02:44.557 00:02:44.557 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:44.557 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:44.557 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:44.557 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:44.557 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:44.557 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:44.557 + rm -f /tmp/spdk-ld-path 00:02:44.557 + source autorun-spdk.conf 00:02:44.557 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:44.557 ++ SPDK_TEST_NVME=1 00:02:44.557 ++ SPDK_TEST_FTL=1 00:02:44.557 ++ SPDK_TEST_ISAL=1 00:02:44.557 ++ SPDK_RUN_ASAN=1 00:02:44.557 ++ SPDK_RUN_UBSAN=1 00:02:44.557 ++ SPDK_TEST_XNVME=1 00:02:44.557 ++ SPDK_TEST_NVME_FDP=1 00:02:44.557 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:44.557 ++ RUN_NIGHTLY=0 00:02:44.557 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:44.557 + [[ -n '' ]] 00:02:44.557 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:44.557 + for M in /var/spdk/build-*-manifest.txt 00:02:44.557 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:44.557 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:44.557 + for M in /var/spdk/build-*-manifest.txt 00:02:44.557 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:44.557 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:44.557 + for M in /var/spdk/build-*-manifest.txt 00:02:44.557 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:44.557 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:44.557 ++ uname 00:02:44.557 + [[ Linux == \L\i\n\u\x ]] 00:02:44.557 + sudo dmesg -T 00:02:44.818 + sudo dmesg --clear 00:02:44.818 + dmesg_pid=5242 00:02:44.818 
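The setup.sh status table above shows the four emulated controllers from the QEMU command line (serials 12340-12343 at PCI addresses 0x10-0x13) surfacing in the guest as nvme0 through nvme3, with nvme2 carrying three namespaces. The same mapping can be read back from sysfs; this loop is illustrative only and not part of the test scripts:

    # Print controller name, PCI BDF, and serial for each emulated NVMe device
    for c in /sys/class/nvme/nvme*; do
        echo "$(basename "$c"): bdf=$(cat "$c/address") serial=$(cat "$c/serial")"
    done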
+ [[ Fedora Linux == FreeBSD ]] 00:02:44.818 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:44.818 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:44.818 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:44.818 + sudo dmesg -Tw 00:02:44.818 + [[ -x /usr/src/fio-static/fio ]] 00:02:44.818 + export FIO_BIN=/usr/src/fio-static/fio 00:02:44.818 + FIO_BIN=/usr/src/fio-static/fio 00:02:44.818 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:44.818 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:44.818 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:44.818 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:44.818 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:44.818 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:44.818 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:44.818 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:44.818 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:44.818 03:47:27 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:44.818 03:47:27 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:44.818 03:47:27 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:44.818 03:47:27 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:44.818 03:47:27 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:44.818 03:47:27 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:44.818 03:47:27 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:44.818 03:47:27 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:44.818 03:47:27 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:44.818 03:47:27 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:44.818 03:47:27 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:44.818 03:47:27 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:02:44.818 03:47:27 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:44.818 03:47:27 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:44.818 03:47:27 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:44.818 03:47:27 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:44.818 03:47:27 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:44.818 03:47:27 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:44.818 03:47:27 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:44.818 03:47:27 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:44.818 03:47:27 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:44.818 03:47:27 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:44.818 03:47:27 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:44.818 03:47:27 -- paths/export.sh@5 -- $ export PATH 00:02:44.818 03:47:27 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:44.818 03:47:27 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:44.818 03:47:27 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:44.818 03:47:27 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733543247.XXXXXX 00:02:44.818 03:47:27 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733543247.sjXgjP 00:02:44.818 03:47:27 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:44.818 03:47:27 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:44.818 03:47:27 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:44.818 03:47:27 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:44.818 03:47:27 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:44.818 03:47:27 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:44.818 03:47:27 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:44.818 03:47:27 -- common/autotest_common.sh@10 -- $ set +x 00:02:45.079 03:47:27 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:45.079 03:47:27 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:45.079 03:47:27 -- pm/common@17 -- $ local monitor 00:02:45.079 03:47:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.079 03:47:27 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:45.079 03:47:27 -- pm/common@25 -- $ sleep 1 00:02:45.079 03:47:27 -- pm/common@21 -- $ date +%s 00:02:45.079 03:47:27 -- pm/common@21 -- $ date +%s 00:02:45.079 03:47:27 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733543247 00:02:45.079 03:47:27 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733543247 00:02:45.079 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733543247_collect-vmstat.pm.log 00:02:45.079 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733543247_collect-cpu-load.pm.log 00:02:46.018 03:47:28 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:46.018 03:47:28 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:46.018 03:47:28 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:46.018 03:47:28 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:46.018 03:47:28 -- spdk/autobuild.sh@16 -- $ date -u 00:02:46.018 Sat Dec 7 03:47:28 AM UTC 2024 00:02:46.018 03:47:28 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:46.018 v25.01-pre-308-g42416bc2c 00:02:46.018 03:47:28 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:46.018 03:47:28 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:46.018 03:47:28 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:46.018 03:47:28 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:46.018 03:47:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:46.018 ************************************ 00:02:46.018 START TEST asan 00:02:46.018 ************************************ 00:02:46.018 using asan 00:02:46.018 03:47:28 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:46.018 00:02:46.018 real 0m0.000s 00:02:46.018 user 0m0.000s 00:02:46.018 sys 0m0.000s 00:02:46.018 03:47:28 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:46.018 03:47:28 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:46.018 ************************************ 00:02:46.018 END TEST asan 00:02:46.018 ************************************ 00:02:46.018 03:47:28 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:46.018 03:47:28 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:46.018 03:47:28 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:46.018 03:47:28 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:46.018 03:47:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:46.018 ************************************ 00:02:46.018 START TEST ubsan 00:02:46.018 ************************************ 00:02:46.018 using ubsan 00:02:46.018 ************************************ 00:02:46.018 END TEST ubsan 00:02:46.018 ************************************ 00:02:46.018 03:47:28 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:46.018 00:02:46.018 real 0m0.000s 00:02:46.018 user 0m0.000s 00:02:46.018 sys 0m0.000s 00:02:46.018 03:47:28 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:46.018 03:47:28 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:46.018 03:47:28 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:46.018 03:47:28 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:46.018 03:47:28 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:46.018 03:47:28 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:46.018 03:47:28 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:46.018 03:47:28 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:46.018 03:47:28 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
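The two collect-* monitors launched above sample CPU load and vmstat for the lifetime of the build, appending to the .pm.log files named in the redirect messages. As a simplified stand-in for what such a monitor does (a sketch, not the actual SPDK script), the pattern is a timestamped sampling loop:

    # Hypothetical minimal monitor: one averaged vmstat sample at a time,
    # prefixed with an epoch timestamp, appended under the power log directory
    outdir=/home/vagrant/spdk_repo/spdk/../output/power   # matches the -d argument above
    while true; do
        echo "$(date +%s) $(vmstat 1 2 | tail -1)" >> "$outdir/monitor.sketch.pm.log"
    done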
00:02:46.018 03:47:28 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:46.018 03:47:28 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:46.277 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:46.277 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:46.846 Using 'verbs' RDMA provider 00:03:02.674 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:20.775 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:20.775 Creating mk/config.mk...done. 00:03:20.775 Creating mk/cc.flags.mk...done. 00:03:20.775 Type 'make' to build. 00:03:20.775 03:48:01 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:20.775 03:48:01 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:20.775 03:48:01 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:20.775 03:48:01 -- common/autotest_common.sh@10 -- $ set +x 00:03:20.775 ************************************ 00:03:20.775 START TEST make 00:03:20.775 ************************************ 00:03:20.775 03:48:01 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:20.775 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:20.775 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:20.775 meson setup builddir \ 00:03:20.775 -Dwith-libaio=enabled \ 00:03:20.775 -Dwith-liburing=enabled \ 00:03:20.775 -Dwith-libvfn=disabled \ 00:03:20.775 -Dwith-spdk=disabled \ 00:03:20.775 -Dexamples=false \ 00:03:20.775 -Dtests=false \ 00:03:20.775 -Dtools=false && \ 00:03:20.775 meson compile -C builddir && \ 00:03:20.775 cd -) 00:03:20.775 make[1]: Nothing to be done for 'all'. 
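The xnvme subproject is configured and built through Meson exactly as echoed at the start of TEST make; run standalone, the same sequence is:

    cd /home/vagrant/spdk_repo/spdk/xnvme
    export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig
    meson setup builddir \
        -Dwith-libaio=enabled -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled -Dwith-spdk=disabled \
        -Dexamples=false -Dtests=false -Dtools=false
    meson compile -C builddir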
00:03:21.712 The Meson build system 00:03:21.712 Version: 1.5.0 00:03:21.712 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:21.712 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:21.712 Build type: native build 00:03:21.712 Project name: xnvme 00:03:21.712 Project version: 0.7.5 00:03:21.712 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:21.712 C linker for the host machine: cc ld.bfd 2.40-14 00:03:21.712 Host machine cpu family: x86_64 00:03:21.712 Host machine cpu: x86_64 00:03:21.712 Message: host_machine.system: linux 00:03:21.712 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:21.712 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:21.712 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:21.712 Run-time dependency threads found: YES 00:03:21.712 Has header "setupapi.h" : NO 00:03:21.712 Has header "linux/blkzoned.h" : YES 00:03:21.712 Has header "linux/blkzoned.h" : YES (cached) 00:03:21.712 Has header "libaio.h" : YES 00:03:21.712 Library aio found: YES 00:03:21.712 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:21.712 Run-time dependency liburing found: YES 2.2 00:03:21.712 Dependency libvfn skipped: feature with-libvfn disabled 00:03:21.712 Found CMake: /usr/bin/cmake (3.27.7) 00:03:21.712 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:03:21.712 Subproject spdk : skipped: feature with-spdk disabled 00:03:21.712 Run-time dependency appleframeworks found: NO (tried framework) 00:03:21.712 Run-time dependency appleframeworks found: NO (tried framework) 00:03:21.712 Library rt found: YES 00:03:21.712 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:21.712 Configuring xnvme_config.h using configuration 00:03:21.712 Configuring xnvme.spec using configuration 00:03:21.712 Run-time dependency bash-completion found: YES 2.11 00:03:21.712 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:21.712 Program cp found: YES (/usr/bin/cp) 00:03:21.712 Build targets in project: 3 00:03:21.712 00:03:21.712 xnvme 0.7.5 00:03:21.712 00:03:21.712 Subprojects 00:03:21.712 spdk : NO Feature 'with-spdk' disabled 00:03:21.712 00:03:21.712 User defined options 00:03:21.712 examples : false 00:03:21.712 tests : false 00:03:21.712 tools : false 00:03:21.712 with-libaio : enabled 00:03:21.712 with-liburing: enabled 00:03:21.712 with-libvfn : disabled 00:03:21.712 with-spdk : disabled 00:03:21.712 00:03:21.712 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:22.281 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:22.281 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:03:22.281 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:03:22.281 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:03:22.281 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:03:22.281 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:03:22.281 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:03:22.281 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:03:22.281 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:03:22.281 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:03:22.281 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 
00:03:22.281 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:03:22.281 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:03:22.281 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:03:22.281 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:03:22.538 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:03:22.539 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:03:22.539 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:03:22.539 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:03:22.539 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:03:22.539 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:03:22.539 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:03:22.539 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:03:22.539 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:03:22.539 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:03:22.539 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:03:22.539 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:03:22.539 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:03:22.539 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:03:22.539 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:03:22.539 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:03:22.539 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:03:22.539 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:03:22.539 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:03:22.539 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:03:22.539 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:03:22.539 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:03:22.539 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:03:22.539 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:03:22.539 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:03:22.539 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:03:22.539 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:03:22.539 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:03:22.539 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:03:22.539 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:03:22.539 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:03:22.539 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:03:22.539 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:03:22.539 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:03:22.539 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:03:22.539 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:03:22.797 [51/76] 
Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:03:22.797 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:03:22.797 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:03:22.797 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:03:22.797 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:03:22.797 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:03:22.797 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:03:22.797 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:03:22.797 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:03:22.797 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:03:22.797 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:03:22.797 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:03:22.797 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:03:22.797 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:03:22.797 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:03:22.797 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:03:22.797 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:03:22.797 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:03:22.797 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:03:23.055 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:03:23.055 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:03:23.055 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:03:23.055 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:03:23.313 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:03:23.313 [75/76] Linking static target lib/libxnvme.a 00:03:23.313 [76/76] Linking target lib/libxnvme.so.0.7.5 00:03:23.313 INFO: autodetecting backend as ninja 00:03:23.313 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:23.313 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:31.458 The Meson build system 00:03:31.458 Version: 1.5.0 00:03:31.458 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:31.458 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:31.458 Build type: native build 00:03:31.458 Program cat found: YES (/usr/bin/cat) 00:03:31.458 Project name: DPDK 00:03:31.458 Project version: 24.03.0 00:03:31.458 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:31.458 C linker for the host machine: cc ld.bfd 2.40-14 00:03:31.458 Host machine cpu family: x86_64 00:03:31.458 Host machine cpu: x86_64 00:03:31.458 Message: ## Building in Developer Mode ## 00:03:31.458 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:31.458 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:31.458 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:31.458 Program python3 found: YES (/usr/bin/python3) 00:03:31.458 Program cat found: YES (/usr/bin/cat) 00:03:31.458 Compiler for C supports arguments -march=native: YES 00:03:31.458 Checking for size of "void *" : 8 00:03:31.458 Checking for size of "void *" : 8 (cached) 00:03:31.458 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:03:31.458 Library m found: YES 00:03:31.458 Library numa found: YES 00:03:31.458 Has header "numaif.h" : YES 00:03:31.458 Library fdt found: NO 00:03:31.458 Library execinfo found: NO 00:03:31.458 Has header "execinfo.h" : YES 00:03:31.458 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:31.458 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:31.458 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:31.458 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:31.458 Run-time dependency openssl found: YES 3.1.1 00:03:31.458 Run-time dependency libpcap found: YES 1.10.4 00:03:31.458 Has header "pcap.h" with dependency libpcap: YES 00:03:31.458 Compiler for C supports arguments -Wcast-qual: YES 00:03:31.458 Compiler for C supports arguments -Wdeprecated: YES 00:03:31.458 Compiler for C supports arguments -Wformat: YES 00:03:31.458 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:31.458 Compiler for C supports arguments -Wformat-security: NO 00:03:31.458 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:31.458 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:31.458 Compiler for C supports arguments -Wnested-externs: YES 00:03:31.458 Compiler for C supports arguments -Wold-style-definition: YES 00:03:31.458 Compiler for C supports arguments -Wpointer-arith: YES 00:03:31.458 Compiler for C supports arguments -Wsign-compare: YES 00:03:31.458 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:31.458 Compiler for C supports arguments -Wundef: YES 00:03:31.458 Compiler for C supports arguments -Wwrite-strings: YES 00:03:31.458 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:31.458 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:31.458 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:31.458 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:31.458 Program objdump found: YES (/usr/bin/objdump) 00:03:31.458 Compiler for C supports arguments -mavx512f: YES 00:03:31.458 Checking if "AVX512 checking" compiles: YES 00:03:31.458 Fetching value of define "__SSE4_2__" : 1 00:03:31.458 Fetching value of define "__AES__" : 1 00:03:31.458 Fetching value of define "__AVX__" : 1 00:03:31.458 Fetching value of define "__AVX2__" : 1 00:03:31.458 Fetching value of define "__AVX512BW__" : 1 00:03:31.458 Fetching value of define "__AVX512CD__" : 1 00:03:31.458 Fetching value of define "__AVX512DQ__" : 1 00:03:31.458 Fetching value of define "__AVX512F__" : 1 00:03:31.458 Fetching value of define "__AVX512VL__" : 1 00:03:31.458 Fetching value of define "__PCLMUL__" : 1 00:03:31.458 Fetching value of define "__RDRND__" : 1 00:03:31.458 Fetching value of define "__RDSEED__" : 1 00:03:31.458 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:31.458 Fetching value of define "__znver1__" : (undefined) 00:03:31.458 Fetching value of define "__znver2__" : (undefined) 00:03:31.458 Fetching value of define "__znver3__" : (undefined) 00:03:31.458 Fetching value of define "__znver4__" : (undefined) 00:03:31.458 Library asan found: YES 00:03:31.458 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:31.458 Message: lib/log: Defining dependency "log" 00:03:31.458 Message: lib/kvargs: Defining dependency "kvargs" 00:03:31.458 Message: lib/telemetry: Defining dependency "telemetry" 00:03:31.458 Library rt found: YES 00:03:31.458 Checking for function "getentropy" : NO 00:03:31.458 
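The "Fetching value of define" entries come from Meson reading the compiler's predefined macros under -march=native; DPDK keys its SIMD code paths off these. The same macros can be dumped by hand (an illustrative one-liner, not part of the build):

    # List the AVX-512 feature macros probed above; empty output means the
    # host compiler/CPU combination does not enable them
    echo | cc -march=native -dM -E - | grep -E '__AVX512(F|BW|CD|DQ|VL)__'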
Message: lib/eal: Defining dependency "eal" 00:03:31.458 Message: lib/ring: Defining dependency "ring" 00:03:31.458 Message: lib/rcu: Defining dependency "rcu" 00:03:31.458 Message: lib/mempool: Defining dependency "mempool" 00:03:31.458 Message: lib/mbuf: Defining dependency "mbuf" 00:03:31.458 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:31.458 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:31.458 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:31.458 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:31.458 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:31.458 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:03:31.458 Compiler for C supports arguments -mpclmul: YES 00:03:31.458 Compiler for C supports arguments -maes: YES 00:03:31.458 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:31.458 Compiler for C supports arguments -mavx512bw: YES 00:03:31.458 Compiler for C supports arguments -mavx512dq: YES 00:03:31.458 Compiler for C supports arguments -mavx512vl: YES 00:03:31.458 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:31.458 Compiler for C supports arguments -mavx2: YES 00:03:31.458 Compiler for C supports arguments -mavx: YES 00:03:31.458 Message: lib/net: Defining dependency "net" 00:03:31.458 Message: lib/meter: Defining dependency "meter" 00:03:31.458 Message: lib/ethdev: Defining dependency "ethdev" 00:03:31.458 Message: lib/pci: Defining dependency "pci" 00:03:31.458 Message: lib/cmdline: Defining dependency "cmdline" 00:03:31.458 Message: lib/hash: Defining dependency "hash" 00:03:31.458 Message: lib/timer: Defining dependency "timer" 00:03:31.458 Message: lib/compressdev: Defining dependency "compressdev" 00:03:31.458 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:31.459 Message: lib/dmadev: Defining dependency "dmadev" 00:03:31.459 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:31.459 Message: lib/power: Defining dependency "power" 00:03:31.459 Message: lib/reorder: Defining dependency "reorder" 00:03:31.459 Message: lib/security: Defining dependency "security" 00:03:31.459 Has header "linux/userfaultfd.h" : YES 00:03:31.459 Has header "linux/vduse.h" : YES 00:03:31.459 Message: lib/vhost: Defining dependency "vhost" 00:03:31.459 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:31.459 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:31.459 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:31.459 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:31.459 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:31.459 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:31.459 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:31.459 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:31.459 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:31.459 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:31.459 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:31.459 Configuring doxy-api-html.conf using configuration 00:03:31.459 Configuring doxy-api-man.conf using configuration 00:03:31.459 Program mandb found: YES (/usr/bin/mandb) 00:03:31.459 Program sphinx-build found: NO 00:03:31.459 Configuring rte_build_config.h using configuration 00:03:31.459 Message: 00:03:31.459 ================= 00:03:31.459 Applications 
Enabled 00:03:31.459 ================= 00:03:31.459 00:03:31.459 apps: 00:03:31.459 00:03:31.459 00:03:31.459 Message: 00:03:31.459 ================= 00:03:31.459 Libraries Enabled 00:03:31.459 ================= 00:03:31.459 00:03:31.459 libs: 00:03:31.459 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:31.459 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:31.459 cryptodev, dmadev, power, reorder, security, vhost, 00:03:31.459 00:03:31.459 Message: 00:03:31.459 =============== 00:03:31.459 Drivers Enabled 00:03:31.459 =============== 00:03:31.459 00:03:31.459 common: 00:03:31.459 00:03:31.459 bus: 00:03:31.459 pci, vdev, 00:03:31.459 mempool: 00:03:31.459 ring, 00:03:31.459 dma: 00:03:31.459 00:03:31.459 net: 00:03:31.459 00:03:31.459 crypto: 00:03:31.459 00:03:31.459 compress: 00:03:31.459 00:03:31.459 vdpa: 00:03:31.459 00:03:31.459 00:03:31.459 Message: 00:03:31.459 ================= 00:03:31.459 Content Skipped 00:03:31.459 ================= 00:03:31.459 00:03:31.459 apps: 00:03:31.459 dumpcap: explicitly disabled via build config 00:03:31.459 graph: explicitly disabled via build config 00:03:31.459 pdump: explicitly disabled via build config 00:03:31.459 proc-info: explicitly disabled via build config 00:03:31.459 test-acl: explicitly disabled via build config 00:03:31.459 test-bbdev: explicitly disabled via build config 00:03:31.459 test-cmdline: explicitly disabled via build config 00:03:31.459 test-compress-perf: explicitly disabled via build config 00:03:31.459 test-crypto-perf: explicitly disabled via build config 00:03:31.459 test-dma-perf: explicitly disabled via build config 00:03:31.459 test-eventdev: explicitly disabled via build config 00:03:31.459 test-fib: explicitly disabled via build config 00:03:31.459 test-flow-perf: explicitly disabled via build config 00:03:31.459 test-gpudev: explicitly disabled via build config 00:03:31.459 test-mldev: explicitly disabled via build config 00:03:31.459 test-pipeline: explicitly disabled via build config 00:03:31.459 test-pmd: explicitly disabled via build config 00:03:31.459 test-regex: explicitly disabled via build config 00:03:31.459 test-sad: explicitly disabled via build config 00:03:31.459 test-security-perf: explicitly disabled via build config 00:03:31.459 00:03:31.459 libs: 00:03:31.459 argparse: explicitly disabled via build config 00:03:31.459 metrics: explicitly disabled via build config 00:03:31.459 acl: explicitly disabled via build config 00:03:31.459 bbdev: explicitly disabled via build config 00:03:31.459 bitratestats: explicitly disabled via build config 00:03:31.459 bpf: explicitly disabled via build config 00:03:31.459 cfgfile: explicitly disabled via build config 00:03:31.459 distributor: explicitly disabled via build config 00:03:31.459 efd: explicitly disabled via build config 00:03:31.459 eventdev: explicitly disabled via build config 00:03:31.459 dispatcher: explicitly disabled via build config 00:03:31.459 gpudev: explicitly disabled via build config 00:03:31.459 gro: explicitly disabled via build config 00:03:31.459 gso: explicitly disabled via build config 00:03:31.459 ip_frag: explicitly disabled via build config 00:03:31.459 jobstats: explicitly disabled via build config 00:03:31.459 latencystats: explicitly disabled via build config 00:03:31.459 lpm: explicitly disabled via build config 00:03:31.459 member: explicitly disabled via build config 00:03:31.459 pcapng: explicitly disabled via build config 00:03:31.459 rawdev: explicitly disabled via build config 00:03:31.459 
00:03:31.459 regexdev: explicitly disabled via build config
00:03:31.459 mldev: explicitly disabled via build config
00:03:31.459 rib: explicitly disabled via build config
00:03:31.459 sched: explicitly disabled via build config
00:03:31.459 stack: explicitly disabled via build config
00:03:31.459 ipsec: explicitly disabled via build config
00:03:31.459 pdcp: explicitly disabled via build config
00:03:31.459 fib: explicitly disabled via build config
00:03:31.459 port: explicitly disabled via build config
00:03:31.459 pdump: explicitly disabled via build config
00:03:31.459 table: explicitly disabled via build config
00:03:31.459 pipeline: explicitly disabled via build config
00:03:31.459 graph: explicitly disabled via build config
00:03:31.459 node: explicitly disabled via build config
00:03:31.459
00:03:31.459 drivers:
00:03:31.459 common/cpt: not in enabled drivers build config
00:03:31.459 common/dpaax: not in enabled drivers build config
00:03:31.459 common/iavf: not in enabled drivers build config
00:03:31.459 common/idpf: not in enabled drivers build config
00:03:31.459 common/ionic: not in enabled drivers build config
00:03:31.459 common/mvep: not in enabled drivers build config
00:03:31.459 common/octeontx: not in enabled drivers build config
00:03:31.459 bus/auxiliary: not in enabled drivers build config
00:03:31.459 bus/cdx: not in enabled drivers build config
00:03:31.459 bus/dpaa: not in enabled drivers build config
00:03:31.459 bus/fslmc: not in enabled drivers build config
00:03:31.459 bus/ifpga: not in enabled drivers build config
00:03:31.459 bus/platform: not in enabled drivers build config
00:03:31.459 bus/uacce: not in enabled drivers build config
00:03:31.459 bus/vmbus: not in enabled drivers build config
00:03:31.459 common/cnxk: not in enabled drivers build config
00:03:31.459 common/mlx5: not in enabled drivers build config
00:03:31.459 common/nfp: not in enabled drivers build config
00:03:31.459 common/nitrox: not in enabled drivers build config
00:03:31.459 common/qat: not in enabled drivers build config
00:03:31.459 common/sfc_efx: not in enabled drivers build config
00:03:31.459 mempool/bucket: not in enabled drivers build config
00:03:31.459 mempool/cnxk: not in enabled drivers build config
00:03:31.459 mempool/dpaa: not in enabled drivers build config
00:03:31.459 mempool/dpaa2: not in enabled drivers build config
00:03:31.459 mempool/octeontx: not in enabled drivers build config
00:03:31.459 mempool/stack: not in enabled drivers build config
00:03:31.459 dma/cnxk: not in enabled drivers build config
00:03:31.459 dma/dpaa: not in enabled drivers build config
00:03:31.459 dma/dpaa2: not in enabled drivers build config
00:03:31.459 dma/hisilicon: not in enabled drivers build config
00:03:31.459 dma/idxd: not in enabled drivers build config
00:03:31.459 dma/ioat: not in enabled drivers build config
00:03:31.459 dma/skeleton: not in enabled drivers build config
00:03:31.459 net/af_packet: not in enabled drivers build config
00:03:31.459 net/af_xdp: not in enabled drivers build config
00:03:31.459 net/ark: not in enabled drivers build config
00:03:31.459 net/atlantic: not in enabled drivers build config
00:03:31.459 net/avp: not in enabled drivers build config
00:03:31.459 net/axgbe: not in enabled drivers build config
00:03:31.459 net/bnx2x: not in enabled drivers build config
00:03:31.459 net/bnxt: not in enabled drivers build config
00:03:31.459 net/bonding: not in enabled drivers build config
00:03:31.459 net/cnxk: not in enabled drivers build config
00:03:31.459 net/cpfl: not in enabled drivers build config
00:03:31.459 net/cxgbe: not in enabled drivers build config
00:03:31.459 net/dpaa: not in enabled drivers build config
00:03:31.459 net/dpaa2: not in enabled drivers build config
00:03:31.459 net/e1000: not in enabled drivers build config
00:03:31.459 net/ena: not in enabled drivers build config
00:03:31.459 net/enetc: not in enabled drivers build config
00:03:31.459 net/enetfec: not in enabled drivers build config
00:03:31.459 net/enic: not in enabled drivers build config
00:03:31.459 net/failsafe: not in enabled drivers build config
00:03:31.459 net/fm10k: not in enabled drivers build config
00:03:31.459 net/gve: not in enabled drivers build config
00:03:31.459 net/hinic: not in enabled drivers build config
00:03:31.459 net/hns3: not in enabled drivers build config
00:03:31.459 net/i40e: not in enabled drivers build config
00:03:31.459 net/iavf: not in enabled drivers build config
00:03:31.459 net/ice: not in enabled drivers build config
00:03:31.459 net/idpf: not in enabled drivers build config
00:03:31.459 net/igc: not in enabled drivers build config
00:03:31.459 net/ionic: not in enabled drivers build config
00:03:31.459 net/ipn3ke: not in enabled drivers build config
00:03:31.459 net/ixgbe: not in enabled drivers build config
00:03:31.459 net/mana: not in enabled drivers build config
00:03:31.459 net/memif: not in enabled drivers build config
00:03:31.459 net/mlx4: not in enabled drivers build config
00:03:31.459 net/mlx5: not in enabled drivers build config
00:03:31.459 net/mvneta: not in enabled drivers build config
00:03:31.459 net/mvpp2: not in enabled drivers build config
00:03:31.459 net/netvsc: not in enabled drivers build config
00:03:31.459 net/nfb: not in enabled drivers build config
00:03:31.459 net/nfp: not in enabled drivers build config
00:03:31.460 net/ngbe: not in enabled drivers build config
00:03:31.460 net/null: not in enabled drivers build config
00:03:31.460 net/octeontx: not in enabled drivers build config
00:03:31.460 net/octeon_ep: not in enabled drivers build config
00:03:31.460 net/pcap: not in enabled drivers build config
00:03:31.460 net/pfe: not in enabled drivers build config
00:03:31.460 net/qede: not in enabled drivers build config
00:03:31.460 net/ring: not in enabled drivers build config
00:03:31.460 net/sfc: not in enabled drivers build config
00:03:31.460 net/softnic: not in enabled drivers build config
00:03:31.460 net/tap: not in enabled drivers build config
00:03:31.460 net/thunderx: not in enabled drivers build config
00:03:31.460 net/txgbe: not in enabled drivers build config
00:03:31.460 net/vdev_netvsc: not in enabled drivers build config
00:03:31.460 net/vhost: not in enabled drivers build config
00:03:31.460 net/virtio: not in enabled drivers build config
00:03:31.460 net/vmxnet3: not in enabled drivers build config
00:03:31.460 raw/*: missing internal dependency, "rawdev"
00:03:31.460 crypto/armv8: not in enabled drivers build config
00:03:31.460 crypto/bcmfs: not in enabled drivers build config
00:03:31.460 crypto/caam_jr: not in enabled drivers build config
00:03:31.460 crypto/ccp: not in enabled drivers build config
00:03:31.460 crypto/cnxk: not in enabled drivers build config
00:03:31.460 crypto/dpaa_sec: not in enabled drivers build config
00:03:31.460 crypto/dpaa2_sec: not in enabled drivers build config
00:03:31.460 crypto/ipsec_mb: not in enabled drivers build config
00:03:31.460 crypto/mlx5: not in enabled drivers build config
00:03:31.460 crypto/mvsam: not in enabled drivers build config
00:03:31.460 crypto/nitrox: not in enabled drivers build config
00:03:31.460 crypto/null: not in enabled drivers build config
00:03:31.460 crypto/octeontx: not in enabled drivers build config
00:03:31.460 crypto/openssl: not in enabled drivers build config
00:03:31.460 crypto/scheduler: not in enabled drivers build config
00:03:31.460 crypto/uadk: not in enabled drivers build config
00:03:31.460 crypto/virtio: not in enabled drivers build config
00:03:31.460 compress/isal: not in enabled drivers build config
00:03:31.460 compress/mlx5: not in enabled drivers build config
00:03:31.460 compress/nitrox: not in enabled drivers build config
00:03:31.460 compress/octeontx: not in enabled drivers build config
00:03:31.460 compress/zlib: not in enabled drivers build config
00:03:31.460 regex/*: missing internal dependency, "regexdev"
00:03:31.460 ml/*: missing internal dependency, "mldev"
00:03:31.460 vdpa/ifc: not in enabled drivers build config
00:03:31.460 vdpa/mlx5: not in enabled drivers build config
00:03:31.460 vdpa/nfp: not in enabled drivers build config
00:03:31.460 vdpa/sfc: not in enabled drivers build config
00:03:31.460 event/*: missing internal dependency, "eventdev"
00:03:31.460 baseband/*: missing internal dependency, "bbdev"
00:03:31.460 gpu/*: missing internal dependency, "gpudev"
00:03:31.460
00:03:31.460
00:03:31.460 Build targets in project: 85
00:03:31.460
00:03:31.460 DPDK 24.03.0
00:03:31.460
00:03:31.460 User defined options
00:03:31.460 buildtype : debug
00:03:31.460 default_library : shared
00:03:31.460 libdir : lib
00:03:31.460 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:31.460 b_sanitize : address
00:03:31.460 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:03:31.460 c_link_args :
00:03:31.460 cpu_instruction_set: native
00:03:31.460 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:03:31.460 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:03:31.460 enable_docs : false
00:03:31.460 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:03:31.460 enable_kmods : false
00:03:31.460 max_lcores : 128
00:03:31.460 tests : false
00:03:31.460
00:03:31.460 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:31.460 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:03:31.460 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:03:31.460 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:31.460 [3/268] Linking static target lib/librte_kvargs.a
00:03:31.460 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:31.460 [5/268] Linking static target lib/librte_log.a
00:03:31.460 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:31.720 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:31.720 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:31.720 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
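[Editor's note] The "User defined options" block above maps directly onto meson arguments, and the long "Content Skipped" lists are the consequence of the disable_apps/disable_libs values. As a rough sketch of the equivalent manual invocation (in this run the command is assembled by SPDK's dpdk build wrapper, so argument order and any extra flags here are assumptions; the abbreviated lists stand in for the full values printed above):

  meson setup build-tmp \
      --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      --libdir=lib \
      -Dbuildtype=debug \
      -Ddefault_library=shared \
      -Db_sanitize=address \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
      -Dcpu_instruction_set=native \
      -Ddisable_apps='dumpcap,graph,pdump,...' \
      -Ddisable_libs='acl,argparse,bbdev,...' \
      -Denable_drivers='bus,bus/pci,bus/vdev,mempool/ring,...' \
      -Denable_docs=false -Denable_kmods=false -Dmax_lcores=128 -Dtests=false
  ninja -C build-tmp -j 10    # matches the backend command ninja reports later in this log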
00:03:31.720 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:31.720 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:31.720 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:31.720 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:31.720 [14/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.720 [15/268] Linking static target lib/librte_telemetry.a 00:03:31.720 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:31.720 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:31.979 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:32.239 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.239 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:32.239 [21/268] Linking target lib/librte_log.so.24.1 00:03:32.239 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:32.239 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:32.239 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:32.239 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:32.498 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:32.498 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:32.498 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:32.498 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:32.498 [30/268] Linking target lib/librte_kvargs.so.24.1 00:03:32.498 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:32.757 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:32.757 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:32.757 [34/268] Linking target lib/librte_telemetry.so.24.1 00:03:32.757 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:33.015 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:33.015 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:33.015 [38/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:33.015 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:33.015 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:33.015 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:33.015 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:33.015 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:33.015 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:33.274 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:33.274 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:33.274 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:33.274 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:33.275 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:33.534 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:33.534 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:33.534 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:33.793 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:33.793 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:33.793 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:33.793 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:33.793 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:34.052 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:34.052 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:34.052 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:34.052 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:34.052 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:34.052 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:34.312 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:34.312 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:34.312 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:34.571 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:34.571 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:34.571 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:34.571 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:34.830 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:34.830 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:34.830 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:34.830 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:34.830 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:34.830 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:34.830 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:34.830 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:34.830 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:35.089 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:35.089 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:35.089 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:35.089 [83/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:35.089 [84/268] Linking static target lib/librte_ring.a 00:03:35.348 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:35.348 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:35.348 [87/268] Linking static target lib/librte_eal.a 00:03:35.348 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:35.607 [89/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:35.607 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:35.607 [91/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:35.607 [92/268] Linking static target lib/librte_rcu.a 00:03:35.607 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:35.607 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:35.607 [95/268] Linking static target lib/librte_mempool.a 00:03:35.607 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:35.866 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:35.866 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:35.866 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:35.866 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:36.129 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:36.129 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:36.129 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.129 [104/268] Linking static target lib/librte_mbuf.a 00:03:36.129 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:36.129 [106/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:36.129 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:36.129 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:36.389 [109/268] Linking static target lib/librte_net.a 00:03:36.389 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:36.389 [111/268] Linking static target lib/librte_meter.a 00:03:36.647 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:36.647 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:36.647 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:36.647 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:36.647 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.906 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.906 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.166 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:37.166 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:37.166 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.425 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:37.685 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:37.685 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:37.685 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:37.685 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:37.685 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:37.685 [128/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:37.685 [129/268] Linking static target lib/librte_pci.a 00:03:37.685 [130/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:37.685 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:37.944 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:37.944 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:37.944 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:37.944 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:37.944 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:37.944 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:37.944 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:37.944 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:37.944 [140/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:38.203 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:38.203 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:38.203 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:38.203 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:38.203 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:38.203 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:38.203 [147/268] Linking static target lib/librte_cmdline.a 00:03:38.203 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:38.462 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:38.722 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:38.722 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:38.722 [152/268] Linking static target lib/librte_timer.a 00:03:38.722 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:38.722 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:38.722 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:38.981 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:39.287 [157/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:39.287 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:39.287 [159/268] Linking static target lib/librte_compressdev.a 00:03:39.287 [160/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.287 [161/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:39.287 [162/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:39.287 [163/268] Linking static target lib/librte_hash.a 00:03:39.287 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:39.546 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:39.547 [166/268] Linking static target lib/librte_dmadev.a 00:03:39.547 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:39.547 [168/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:39.547 [169/268] Linking static target lib/librte_ethdev.a 00:03:39.806 [170/268] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:39.806 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:39.806 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:39.806 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:39.806 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:40.374 [175/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:40.374 [176/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:40.374 [177/268] Linking static target lib/librte_cryptodev.a 00:03:40.374 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:40.374 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:40.374 [180/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:40.374 [181/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.374 [182/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.374 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:40.633 [184/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:40.634 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:40.634 [186/268] Linking static target lib/librte_power.a 00:03:40.893 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:40.893 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:40.893 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:40.893 [190/268] Linking static target lib/librte_reorder.a 00:03:41.153 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:41.412 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:41.412 [193/268] Linking static target lib/librte_security.a 00:03:41.673 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:41.673 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.932 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:41.932 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:41.932 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:42.192 [199/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.192 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:42.192 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:42.475 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:42.475 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:42.475 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:42.737 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:42.737 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:42.737 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:42.737 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:42.737 [209/268] Compiling C 
object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:42.737 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:42.737 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.995 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:42.995 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:42.995 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:42.995 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:42.995 [216/268] Linking static target drivers/librte_bus_pci.a 00:03:42.995 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:42.995 [218/268] Linking static target drivers/librte_bus_vdev.a 00:03:42.995 [219/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:43.253 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:43.253 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:43.253 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:43.512 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:43.512 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:43.512 [225/268] Linking static target drivers/librte_mempool_ring.a 00:03:43.512 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.512 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.079 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:47.365 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:47.366 [230/268] Linking static target lib/librte_vhost.a 00:03:48.304 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.304 [232/268] Linking target lib/librte_eal.so.24.1 00:03:48.304 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:48.304 [234/268] Linking target lib/librte_pci.so.24.1 00:03:48.304 [235/268] Linking target lib/librte_meter.so.24.1 00:03:48.304 [236/268] Linking target lib/librte_ring.so.24.1 00:03:48.304 [237/268] Linking target lib/librte_timer.so.24.1 00:03:48.304 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:48.304 [239/268] Linking target lib/librte_dmadev.so.24.1 00:03:48.563 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:48.563 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:48.563 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:48.563 [243/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:48.563 [244/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:48.563 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:48.563 [246/268] Linking target lib/librte_rcu.so.24.1 00:03:48.563 [247/268] Linking target lib/librte_mempool.so.24.1 00:03:48.563 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:48.563 [249/268] 
Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:48.823 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:48.823 [251/268] Linking target lib/librte_mbuf.so.24.1 00:03:48.823 [252/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:48.823 [253/268] Linking target lib/librte_net.so.24.1 00:03:48.823 [254/268] Linking target lib/librte_reorder.so.24.1 00:03:48.823 [255/268] Linking target lib/librte_compressdev.so.24.1 00:03:48.823 [256/268] Linking target lib/librte_cryptodev.so.24.1 00:03:49.082 [257/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.082 [258/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:49.082 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:49.082 [260/268] Linking target lib/librte_cmdline.so.24.1 00:03:49.082 [261/268] Linking target lib/librte_hash.so.24.1 00:03:49.082 [262/268] Linking target lib/librte_security.so.24.1 00:03:49.082 [263/268] Linking target lib/librte_ethdev.so.24.1 00:03:49.082 [264/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:49.082 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:49.082 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:49.341 [267/268] Linking target lib/librte_power.so.24.1 00:03:49.341 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:49.341 INFO: autodetecting backend as ninja 00:03:49.341 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:11.275 CC lib/ut/ut.o 00:04:11.275 CC lib/log/log.o 00:04:11.275 CC lib/log/log_flags.o 00:04:11.275 CC lib/log/log_deprecated.o 00:04:11.275 CC lib/ut_mock/mock.o 00:04:11.275 LIB libspdk_ut.a 00:04:11.275 LIB libspdk_ut_mock.a 00:04:11.275 LIB libspdk_log.a 00:04:11.275 SO libspdk_ut.so.2.0 00:04:11.275 SO libspdk_ut_mock.so.6.0 00:04:11.275 SO libspdk_log.so.7.1 00:04:11.275 SYMLINK libspdk_ut.so 00:04:11.275 SYMLINK libspdk_ut_mock.so 00:04:11.275 SYMLINK libspdk_log.so 00:04:11.275 CC lib/ioat/ioat.o 00:04:11.275 CC lib/util/cpuset.o 00:04:11.275 CC lib/util/bit_array.o 00:04:11.275 CC lib/util/base64.o 00:04:11.275 CC lib/util/crc32c.o 00:04:11.275 CC lib/util/crc32.o 00:04:11.275 CC lib/util/crc16.o 00:04:11.275 CC lib/dma/dma.o 00:04:11.275 CXX lib/trace_parser/trace.o 00:04:11.275 CC lib/vfio_user/host/vfio_user_pci.o 00:04:11.275 CC lib/util/crc32_ieee.o 00:04:11.275 CC lib/vfio_user/host/vfio_user.o 00:04:11.275 CC lib/util/crc64.o 00:04:11.275 CC lib/util/dif.o 00:04:11.275 LIB libspdk_dma.a 00:04:11.275 CC lib/util/fd.o 00:04:11.275 SO libspdk_dma.so.5.0 00:04:11.275 CC lib/util/fd_group.o 00:04:11.275 CC lib/util/file.o 00:04:11.275 CC lib/util/hexlify.o 00:04:11.275 SYMLINK libspdk_dma.so 00:04:11.275 CC lib/util/iov.o 00:04:11.275 LIB libspdk_ioat.a 00:04:11.275 SO libspdk_ioat.so.7.0 00:04:11.275 CC lib/util/math.o 00:04:11.275 CC lib/util/net.o 00:04:11.275 LIB libspdk_vfio_user.a 00:04:11.275 SYMLINK libspdk_ioat.so 00:04:11.275 CC lib/util/pipe.o 00:04:11.275 SO libspdk_vfio_user.so.5.0 00:04:11.275 CC lib/util/strerror_tls.o 00:04:11.275 CC lib/util/string.o 00:04:11.275 SYMLINK libspdk_vfio_user.so 00:04:11.275 CC lib/util/uuid.o 00:04:11.275 CC lib/util/xor.o 00:04:11.275 CC lib/util/zipf.o 00:04:11.275 CC lib/util/md5.o 
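[Editor's note] With [268/268] the DPDK subproject is fully linked, and the log switches to SPDK's own Makefile output: CC lines are per-object compiles, LIB lines create static archives, and the SO/SYMLINK pairs produce versioned shared objects. Outside of CI the equivalent manual steps would look roughly like this (a sketch; the autotest scripts pass configure options that are not visible in this excerpt, and --with-shared is inferred from the SO/SYMLINK steps):

  cd /home/vagrant/spdk_repo/spdk
  ./configure --with-shared    # assumption: shared-library build, to match the SO/SYMLINK output
  make -j10                    # job count chosen to match the ninja -j 10 seen above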
00:04:11.275 LIB libspdk_util.a 00:04:11.275 SO libspdk_util.so.10.1 00:04:11.275 LIB libspdk_trace_parser.a 00:04:11.275 SO libspdk_trace_parser.so.6.0 00:04:11.275 SYMLINK libspdk_util.so 00:04:11.275 SYMLINK libspdk_trace_parser.so 00:04:11.275 CC lib/idxd/idxd.o 00:04:11.275 CC lib/idxd/idxd_user.o 00:04:11.275 CC lib/vmd/vmd.o 00:04:11.275 CC lib/idxd/idxd_kernel.o 00:04:11.275 CC lib/vmd/led.o 00:04:11.275 CC lib/rdma_utils/rdma_utils.o 00:04:11.275 CC lib/env_dpdk/env.o 00:04:11.275 CC lib/env_dpdk/memory.o 00:04:11.275 CC lib/conf/conf.o 00:04:11.275 CC lib/json/json_parse.o 00:04:11.275 CC lib/json/json_util.o 00:04:11.275 CC lib/json/json_write.o 00:04:11.275 CC lib/env_dpdk/pci.o 00:04:11.275 CC lib/env_dpdk/init.o 00:04:11.275 LIB libspdk_conf.a 00:04:11.275 LIB libspdk_rdma_utils.a 00:04:11.275 SO libspdk_conf.so.6.0 00:04:11.275 SO libspdk_rdma_utils.so.1.0 00:04:11.275 SYMLINK libspdk_conf.so 00:04:11.275 SYMLINK libspdk_rdma_utils.so 00:04:11.275 CC lib/env_dpdk/threads.o 00:04:11.275 CC lib/env_dpdk/pci_ioat.o 00:04:11.275 CC lib/env_dpdk/pci_virtio.o 00:04:11.275 LIB libspdk_json.a 00:04:11.275 SO libspdk_json.so.6.0 00:04:11.275 CC lib/env_dpdk/pci_vmd.o 00:04:11.275 CC lib/env_dpdk/pci_idxd.o 00:04:11.275 SYMLINK libspdk_json.so 00:04:11.275 CC lib/env_dpdk/pci_event.o 00:04:11.275 CC lib/env_dpdk/sigbus_handler.o 00:04:11.275 CC lib/env_dpdk/pci_dpdk.o 00:04:11.275 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:11.275 LIB libspdk_idxd.a 00:04:11.275 CC lib/rdma_provider/common.o 00:04:11.275 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:11.275 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:11.275 SO libspdk_idxd.so.12.1 00:04:11.275 LIB libspdk_vmd.a 00:04:11.275 SO libspdk_vmd.so.6.0 00:04:11.275 SYMLINK libspdk_idxd.so 00:04:11.275 CC lib/jsonrpc/jsonrpc_server.o 00:04:11.275 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:11.275 CC lib/jsonrpc/jsonrpc_client.o 00:04:11.275 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:11.275 SYMLINK libspdk_vmd.so 00:04:11.275 LIB libspdk_rdma_provider.a 00:04:11.275 SO libspdk_rdma_provider.so.7.0 00:04:11.536 SYMLINK libspdk_rdma_provider.so 00:04:11.536 LIB libspdk_jsonrpc.a 00:04:11.536 SO libspdk_jsonrpc.so.6.0 00:04:11.536 SYMLINK libspdk_jsonrpc.so 00:04:12.104 LIB libspdk_env_dpdk.a 00:04:12.105 CC lib/rpc/rpc.o 00:04:12.105 SO libspdk_env_dpdk.so.15.1 00:04:12.363 SYMLINK libspdk_env_dpdk.so 00:04:12.363 LIB libspdk_rpc.a 00:04:12.363 SO libspdk_rpc.so.6.0 00:04:12.363 SYMLINK libspdk_rpc.so 00:04:12.930 CC lib/keyring/keyring.o 00:04:12.930 CC lib/keyring/keyring_rpc.o 00:04:12.930 CC lib/notify/notify.o 00:04:12.930 CC lib/notify/notify_rpc.o 00:04:12.930 CC lib/trace/trace.o 00:04:12.930 CC lib/trace/trace_flags.o 00:04:12.930 CC lib/trace/trace_rpc.o 00:04:12.930 LIB libspdk_notify.a 00:04:12.930 SO libspdk_notify.so.6.0 00:04:12.930 LIB libspdk_keyring.a 00:04:13.189 LIB libspdk_trace.a 00:04:13.189 SYMLINK libspdk_notify.so 00:04:13.189 SO libspdk_keyring.so.2.0 00:04:13.189 SO libspdk_trace.so.11.0 00:04:13.189 SYMLINK libspdk_keyring.so 00:04:13.189 SYMLINK libspdk_trace.so 00:04:13.758 CC lib/sock/sock.o 00:04:13.758 CC lib/sock/sock_rpc.o 00:04:13.758 CC lib/thread/thread.o 00:04:13.758 CC lib/thread/iobuf.o 00:04:14.018 LIB libspdk_sock.a 00:04:14.018 SO libspdk_sock.so.10.0 00:04:14.277 SYMLINK libspdk_sock.so 00:04:14.535 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:14.535 CC lib/nvme/nvme_ctrlr.o 00:04:14.535 CC lib/nvme/nvme_fabric.o 00:04:14.535 CC lib/nvme/nvme_ns_cmd.o 00:04:14.535 CC lib/nvme/nvme_ns.o 00:04:14.535 CC 
lib/nvme/nvme_pcie_common.o 00:04:14.535 CC lib/nvme/nvme_qpair.o 00:04:14.535 CC lib/nvme/nvme.o 00:04:14.535 CC lib/nvme/nvme_pcie.o 00:04:15.471 CC lib/nvme/nvme_quirks.o 00:04:15.471 CC lib/nvme/nvme_transport.o 00:04:15.471 CC lib/nvme/nvme_discovery.o 00:04:15.471 LIB libspdk_thread.a 00:04:15.471 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:15.471 SO libspdk_thread.so.11.0 00:04:15.471 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:15.471 SYMLINK libspdk_thread.so 00:04:15.471 CC lib/nvme/nvme_tcp.o 00:04:15.730 CC lib/accel/accel.o 00:04:15.730 CC lib/blob/blobstore.o 00:04:15.730 CC lib/blob/request.o 00:04:15.730 CC lib/accel/accel_rpc.o 00:04:15.730 CC lib/blob/zeroes.o 00:04:15.988 CC lib/accel/accel_sw.o 00:04:15.988 CC lib/nvme/nvme_opal.o 00:04:15.988 CC lib/nvme/nvme_io_msg.o 00:04:15.988 CC lib/nvme/nvme_poll_group.o 00:04:15.988 CC lib/blob/blob_bs_dev.o 00:04:15.988 CC lib/nvme/nvme_zns.o 00:04:16.247 CC lib/init/json_config.o 00:04:16.505 CC lib/virtio/virtio.o 00:04:16.505 CC lib/virtio/virtio_vhost_user.o 00:04:16.505 CC lib/virtio/virtio_vfio_user.o 00:04:16.505 CC lib/nvme/nvme_stubs.o 00:04:16.505 CC lib/init/subsystem.o 00:04:16.764 CC lib/virtio/virtio_pci.o 00:04:16.764 CC lib/fsdev/fsdev.o 00:04:16.764 CC lib/fsdev/fsdev_io.o 00:04:16.764 CC lib/init/subsystem_rpc.o 00:04:16.764 LIB libspdk_accel.a 00:04:16.764 CC lib/init/rpc.o 00:04:16.764 SO libspdk_accel.so.16.0 00:04:16.764 CC lib/nvme/nvme_auth.o 00:04:17.032 SYMLINK libspdk_accel.so 00:04:17.032 CC lib/fsdev/fsdev_rpc.o 00:04:17.032 LIB libspdk_init.a 00:04:17.032 LIB libspdk_virtio.a 00:04:17.032 SO libspdk_init.so.6.0 00:04:17.032 CC lib/nvme/nvme_cuse.o 00:04:17.032 SO libspdk_virtio.so.7.0 00:04:17.032 CC lib/nvme/nvme_rdma.o 00:04:17.032 SYMLINK libspdk_init.so 00:04:17.032 CC lib/bdev/bdev_rpc.o 00:04:17.032 CC lib/bdev/bdev.o 00:04:17.032 SYMLINK libspdk_virtio.so 00:04:17.032 CC lib/bdev/bdev_zone.o 00:04:17.032 CC lib/bdev/part.o 00:04:17.328 CC lib/bdev/scsi_nvme.o 00:04:17.328 CC lib/event/app.o 00:04:17.328 LIB libspdk_fsdev.a 00:04:17.587 CC lib/event/reactor.o 00:04:17.587 SO libspdk_fsdev.so.2.0 00:04:17.587 CC lib/event/log_rpc.o 00:04:17.587 CC lib/event/app_rpc.o 00:04:17.587 SYMLINK libspdk_fsdev.so 00:04:17.587 CC lib/event/scheduler_static.o 00:04:17.846 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:17.846 LIB libspdk_event.a 00:04:18.106 SO libspdk_event.so.14.0 00:04:18.106 SYMLINK libspdk_event.so 00:04:18.675 LIB libspdk_fuse_dispatcher.a 00:04:18.675 LIB libspdk_nvme.a 00:04:18.675 SO libspdk_fuse_dispatcher.so.1.0 00:04:18.675 SYMLINK libspdk_fuse_dispatcher.so 00:04:18.934 SO libspdk_nvme.so.15.0 00:04:19.192 SYMLINK libspdk_nvme.so 00:04:19.192 LIB libspdk_blob.a 00:04:19.451 SO libspdk_blob.so.12.0 00:04:19.451 SYMLINK libspdk_blob.so 00:04:20.017 CC lib/blobfs/blobfs.o 00:04:20.017 CC lib/blobfs/tree.o 00:04:20.017 CC lib/lvol/lvol.o 00:04:20.017 LIB libspdk_bdev.a 00:04:20.276 SO libspdk_bdev.so.17.0 00:04:20.276 SYMLINK libspdk_bdev.so 00:04:20.534 CC lib/nvmf/ctrlr.o 00:04:20.534 CC lib/nvmf/subsystem.o 00:04:20.534 CC lib/nvmf/ctrlr_bdev.o 00:04:20.534 CC lib/nvmf/ctrlr_discovery.o 00:04:20.534 CC lib/scsi/dev.o 00:04:20.534 CC lib/ublk/ublk.o 00:04:20.534 CC lib/nbd/nbd.o 00:04:20.534 CC lib/ftl/ftl_core.o 00:04:20.791 CC lib/scsi/lun.o 00:04:20.791 LIB libspdk_blobfs.a 00:04:20.791 SO libspdk_blobfs.so.11.0 00:04:21.049 SYMLINK libspdk_blobfs.so 00:04:21.049 CC lib/ftl/ftl_init.o 00:04:21.049 LIB libspdk_lvol.a 00:04:21.049 CC lib/nvmf/nvmf.o 00:04:21.049 SO libspdk_lvol.so.11.0 
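[Editor's note] Each component in this stretch is emitted three ways: a static archive (LIB), a versioned shared object (SO, e.g. libspdk_lvol.so.11.0 just above), and an unversioned development symlink (SYMLINK, which follows). A quick post-build sanity check might look like this, assuming the default build/lib output directory of an SPDK tree:

  ls -l build/lib/libspdk_lvol.so*    # the unversioned name should be a symlink to the versioned .so
  ldd build/lib/libspdk_lvol.so       # inter-library dependencies should resolve within build/lib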
00:04:21.049 CC lib/nbd/nbd_rpc.o 00:04:21.049 CC lib/ftl/ftl_layout.o 00:04:21.049 SYMLINK libspdk_lvol.so 00:04:21.049 CC lib/ftl/ftl_debug.o 00:04:21.049 CC lib/scsi/port.o 00:04:21.049 CC lib/ublk/ublk_rpc.o 00:04:21.049 LIB libspdk_nbd.a 00:04:21.306 SO libspdk_nbd.so.7.0 00:04:21.306 CC lib/scsi/scsi.o 00:04:21.306 CC lib/scsi/scsi_bdev.o 00:04:21.306 SYMLINK libspdk_nbd.so 00:04:21.306 CC lib/scsi/scsi_pr.o 00:04:21.306 CC lib/scsi/scsi_rpc.o 00:04:21.306 LIB libspdk_ublk.a 00:04:21.306 CC lib/scsi/task.o 00:04:21.306 SO libspdk_ublk.so.3.0 00:04:21.306 CC lib/ftl/ftl_io.o 00:04:21.306 CC lib/nvmf/nvmf_rpc.o 00:04:21.306 SYMLINK libspdk_ublk.so 00:04:21.306 CC lib/nvmf/transport.o 00:04:21.306 CC lib/nvmf/tcp.o 00:04:21.563 CC lib/nvmf/stubs.o 00:04:21.563 CC lib/ftl/ftl_sb.o 00:04:21.563 CC lib/ftl/ftl_l2p.o 00:04:21.820 LIB libspdk_scsi.a 00:04:21.820 SO libspdk_scsi.so.9.0 00:04:21.820 CC lib/ftl/ftl_l2p_flat.o 00:04:21.820 CC lib/ftl/ftl_nv_cache.o 00:04:21.820 SYMLINK libspdk_scsi.so 00:04:21.820 CC lib/ftl/ftl_band.o 00:04:21.820 CC lib/ftl/ftl_band_ops.o 00:04:21.820 CC lib/ftl/ftl_writer.o 00:04:22.077 CC lib/nvmf/mdns_server.o 00:04:22.077 CC lib/nvmf/rdma.o 00:04:22.077 CC lib/ftl/ftl_rq.o 00:04:22.335 CC lib/ftl/ftl_reloc.o 00:04:22.335 CC lib/ftl/ftl_l2p_cache.o 00:04:22.335 CC lib/iscsi/conn.o 00:04:22.335 CC lib/iscsi/init_grp.o 00:04:22.335 CC lib/vhost/vhost.o 00:04:22.335 CC lib/ftl/ftl_p2l.o 00:04:22.335 CC lib/ftl/ftl_p2l_log.o 00:04:22.592 CC lib/iscsi/iscsi.o 00:04:22.592 CC lib/nvmf/auth.o 00:04:22.850 CC lib/vhost/vhost_rpc.o 00:04:22.850 CC lib/vhost/vhost_scsi.o 00:04:22.850 CC lib/vhost/vhost_blk.o 00:04:22.850 CC lib/ftl/mngt/ftl_mngt.o 00:04:22.850 CC lib/iscsi/param.o 00:04:23.108 CC lib/iscsi/portal_grp.o 00:04:23.108 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:23.367 CC lib/vhost/rte_vhost_user.o 00:04:23.367 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:23.367 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:23.367 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:23.367 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:23.367 CC lib/iscsi/tgt_node.o 00:04:23.625 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:23.625 CC lib/iscsi/iscsi_subsystem.o 00:04:23.625 CC lib/iscsi/iscsi_rpc.o 00:04:23.625 CC lib/iscsi/task.o 00:04:23.625 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:23.625 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:23.883 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:23.883 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:23.883 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:23.883 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:23.883 CC lib/ftl/utils/ftl_conf.o 00:04:23.883 CC lib/ftl/utils/ftl_md.o 00:04:24.141 CC lib/ftl/utils/ftl_mempool.o 00:04:24.141 CC lib/ftl/utils/ftl_bitmap.o 00:04:24.141 CC lib/ftl/utils/ftl_property.o 00:04:24.141 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:24.141 LIB libspdk_iscsi.a 00:04:24.141 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:24.400 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:24.400 SO libspdk_iscsi.so.8.0 00:04:24.400 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:24.400 LIB libspdk_vhost.a 00:04:24.400 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:24.400 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:24.400 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:24.400 LIB libspdk_nvmf.a 00:04:24.400 SO libspdk_vhost.so.8.0 00:04:24.400 SYMLINK libspdk_iscsi.so 00:04:24.400 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:24.400 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:24.400 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:24.400 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:24.400 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:24.400 SYMLINK 
libspdk_vhost.so 00:04:24.659 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:24.659 SO libspdk_nvmf.so.20.0 00:04:24.659 CC lib/ftl/base/ftl_base_dev.o 00:04:24.659 CC lib/ftl/base/ftl_base_bdev.o 00:04:24.659 CC lib/ftl/ftl_trace.o 00:04:24.919 SYMLINK libspdk_nvmf.so 00:04:24.919 LIB libspdk_ftl.a 00:04:25.178 SO libspdk_ftl.so.9.0 00:04:25.438 SYMLINK libspdk_ftl.so 00:04:26.006 CC module/env_dpdk/env_dpdk_rpc.o 00:04:26.006 CC module/blob/bdev/blob_bdev.o 00:04:26.006 CC module/keyring/linux/keyring.o 00:04:26.006 CC module/sock/posix/posix.o 00:04:26.006 CC module/keyring/file/keyring.o 00:04:26.006 CC module/scheduler/gscheduler/gscheduler.o 00:04:26.006 CC module/fsdev/aio/fsdev_aio.o 00:04:26.006 CC module/accel/error/accel_error.o 00:04:26.006 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:26.006 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:26.006 LIB libspdk_env_dpdk_rpc.a 00:04:26.006 SO libspdk_env_dpdk_rpc.so.6.0 00:04:26.006 SYMLINK libspdk_env_dpdk_rpc.so 00:04:26.006 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:26.006 CC module/keyring/linux/keyring_rpc.o 00:04:26.006 CC module/keyring/file/keyring_rpc.o 00:04:26.286 LIB libspdk_scheduler_gscheduler.a 00:04:26.286 LIB libspdk_scheduler_dpdk_governor.a 00:04:26.286 SO libspdk_scheduler_gscheduler.so.4.0 00:04:26.286 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:26.286 CC module/accel/error/accel_error_rpc.o 00:04:26.286 LIB libspdk_scheduler_dynamic.a 00:04:26.286 SO libspdk_scheduler_dynamic.so.4.0 00:04:26.286 SYMLINK libspdk_scheduler_gscheduler.so 00:04:26.286 CC module/fsdev/aio/linux_aio_mgr.o 00:04:26.286 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:26.286 LIB libspdk_keyring_linux.a 00:04:26.286 LIB libspdk_blob_bdev.a 00:04:26.286 LIB libspdk_keyring_file.a 00:04:26.286 SYMLINK libspdk_scheduler_dynamic.so 00:04:26.286 SO libspdk_keyring_linux.so.1.0 00:04:26.286 SO libspdk_blob_bdev.so.12.0 00:04:26.286 SO libspdk_keyring_file.so.2.0 00:04:26.286 LIB libspdk_accel_error.a 00:04:26.286 SYMLINK libspdk_blob_bdev.so 00:04:26.286 SO libspdk_accel_error.so.2.0 00:04:26.286 SYMLINK libspdk_keyring_linux.so 00:04:26.286 SYMLINK libspdk_keyring_file.so 00:04:26.286 SYMLINK libspdk_accel_error.so 00:04:26.545 CC module/accel/ioat/accel_ioat.o 00:04:26.545 CC module/accel/ioat/accel_ioat_rpc.o 00:04:26.545 CC module/accel/dsa/accel_dsa.o 00:04:26.545 CC module/accel/dsa/accel_dsa_rpc.o 00:04:26.545 CC module/accel/iaa/accel_iaa.o 00:04:26.545 CC module/accel/iaa/accel_iaa_rpc.o 00:04:26.545 LIB libspdk_accel_ioat.a 00:04:26.545 CC module/bdev/delay/vbdev_delay.o 00:04:26.545 CC module/blobfs/bdev/blobfs_bdev.o 00:04:26.545 SO libspdk_accel_ioat.so.6.0 00:04:26.545 LIB libspdk_accel_iaa.a 00:04:26.805 SO libspdk_accel_iaa.so.3.0 00:04:26.805 LIB libspdk_fsdev_aio.a 00:04:26.805 CC module/bdev/error/vbdev_error.o 00:04:26.805 LIB libspdk_accel_dsa.a 00:04:26.805 CC module/bdev/lvol/vbdev_lvol.o 00:04:26.805 SYMLINK libspdk_accel_ioat.so 00:04:26.805 CC module/bdev/gpt/gpt.o 00:04:26.805 CC module/bdev/gpt/vbdev_gpt.o 00:04:26.805 LIB libspdk_sock_posix.a 00:04:26.805 SO libspdk_fsdev_aio.so.1.0 00:04:26.805 SO libspdk_accel_dsa.so.5.0 00:04:26.805 SYMLINK libspdk_accel_iaa.so 00:04:26.805 SO libspdk_sock_posix.so.6.0 00:04:26.805 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:26.805 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:26.805 SYMLINK libspdk_fsdev_aio.so 00:04:26.805 SYMLINK libspdk_accel_dsa.so 00:04:26.805 SYMLINK libspdk_sock_posix.so 00:04:27.064 LIB libspdk_blobfs_bdev.a 00:04:27.064 CC 
module/bdev/error/vbdev_error_rpc.o 00:04:27.064 LIB libspdk_bdev_gpt.a 00:04:27.064 SO libspdk_blobfs_bdev.so.6.0 00:04:27.064 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:27.064 CC module/bdev/malloc/bdev_malloc.o 00:04:27.064 CC module/bdev/nvme/bdev_nvme.o 00:04:27.064 SO libspdk_bdev_gpt.so.6.0 00:04:27.064 CC module/bdev/null/bdev_null.o 00:04:27.064 CC module/bdev/passthru/vbdev_passthru.o 00:04:27.064 SYMLINK libspdk_blobfs_bdev.so 00:04:27.064 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:27.064 SYMLINK libspdk_bdev_gpt.so 00:04:27.064 LIB libspdk_bdev_error.a 00:04:27.323 LIB libspdk_bdev_delay.a 00:04:27.323 SO libspdk_bdev_error.so.6.0 00:04:27.323 SO libspdk_bdev_delay.so.6.0 00:04:27.323 LIB libspdk_bdev_lvol.a 00:04:27.323 SYMLINK libspdk_bdev_error.so 00:04:27.323 CC module/bdev/null/bdev_null_rpc.o 00:04:27.323 CC module/bdev/raid/bdev_raid.o 00:04:27.323 SYMLINK libspdk_bdev_delay.so 00:04:27.323 CC module/bdev/raid/bdev_raid_rpc.o 00:04:27.323 SO libspdk_bdev_lvol.so.6.0 00:04:27.323 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:27.323 SYMLINK libspdk_bdev_lvol.so 00:04:27.323 CC module/bdev/raid/bdev_raid_sb.o 00:04:27.323 CC module/bdev/split/vbdev_split.o 00:04:27.323 LIB libspdk_bdev_passthru.a 00:04:27.323 CC module/bdev/split/vbdev_split_rpc.o 00:04:27.582 LIB libspdk_bdev_null.a 00:04:27.583 SO libspdk_bdev_passthru.so.6.0 00:04:27.583 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:27.583 SO libspdk_bdev_null.so.6.0 00:04:27.583 LIB libspdk_bdev_malloc.a 00:04:27.583 SYMLINK libspdk_bdev_passthru.so 00:04:27.583 SO libspdk_bdev_malloc.so.6.0 00:04:27.583 SYMLINK libspdk_bdev_null.so 00:04:27.583 CC module/bdev/raid/raid0.o 00:04:27.583 CC module/bdev/raid/raid1.o 00:04:27.583 SYMLINK libspdk_bdev_malloc.so 00:04:27.583 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:27.583 LIB libspdk_bdev_split.a 00:04:27.583 CC module/bdev/raid/concat.o 00:04:27.583 SO libspdk_bdev_split.so.6.0 00:04:27.583 CC module/bdev/aio/bdev_aio.o 00:04:27.841 CC module/bdev/xnvme/bdev_xnvme.o 00:04:27.841 SYMLINK libspdk_bdev_split.so 00:04:27.841 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:27.841 LIB libspdk_bdev_zone_block.a 00:04:27.841 SO libspdk_bdev_zone_block.so.6.0 00:04:27.841 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:27.841 CC module/bdev/nvme/nvme_rpc.o 00:04:27.841 CC module/bdev/aio/bdev_aio_rpc.o 00:04:27.841 SYMLINK libspdk_bdev_zone_block.so 00:04:27.841 CC module/bdev/nvme/bdev_mdns_client.o 00:04:27.841 CC module/bdev/ftl/bdev_ftl.o 00:04:28.099 LIB libspdk_bdev_xnvme.a 00:04:28.099 SO libspdk_bdev_xnvme.so.3.0 00:04:28.099 LIB libspdk_bdev_aio.a 00:04:28.099 CC module/bdev/nvme/vbdev_opal.o 00:04:28.099 SYMLINK libspdk_bdev_xnvme.so 00:04:28.099 SO libspdk_bdev_aio.so.6.0 00:04:28.099 CC module/bdev/iscsi/bdev_iscsi.o 00:04:28.099 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:28.099 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:28.099 SYMLINK libspdk_bdev_aio.so 00:04:28.099 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:28.099 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:28.358 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:28.358 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:28.358 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:28.358 LIB libspdk_bdev_ftl.a 00:04:28.358 SO libspdk_bdev_ftl.so.6.0 00:04:28.358 LIB libspdk_bdev_raid.a 00:04:28.358 SYMLINK libspdk_bdev_ftl.so 00:04:28.358 LIB libspdk_bdev_iscsi.a 00:04:28.617 SO libspdk_bdev_iscsi.so.6.0 00:04:28.618 SO libspdk_bdev_raid.so.6.0 00:04:28.618 SYMLINK libspdk_bdev_iscsi.so 00:04:28.618 SYMLINK 
libspdk_bdev_raid.so 00:04:28.877 LIB libspdk_bdev_virtio.a 00:04:28.877 SO libspdk_bdev_virtio.so.6.0 00:04:28.877 SYMLINK libspdk_bdev_virtio.so 00:04:30.252 LIB libspdk_bdev_nvme.a 00:04:30.252 SO libspdk_bdev_nvme.so.7.1 00:04:30.252 SYMLINK libspdk_bdev_nvme.so 00:04:30.821 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:30.821 CC module/event/subsystems/fsdev/fsdev.o 00:04:30.821 CC module/event/subsystems/sock/sock.o 00:04:30.821 CC module/event/subsystems/vmd/vmd.o 00:04:30.821 CC module/event/subsystems/iobuf/iobuf.o 00:04:30.821 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:30.821 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:30.821 CC module/event/subsystems/keyring/keyring.o 00:04:30.821 CC module/event/subsystems/scheduler/scheduler.o 00:04:31.080 LIB libspdk_event_fsdev.a 00:04:31.080 LIB libspdk_event_sock.a 00:04:31.081 LIB libspdk_event_vmd.a 00:04:31.081 LIB libspdk_event_keyring.a 00:04:31.081 LIB libspdk_event_vhost_blk.a 00:04:31.081 LIB libspdk_event_iobuf.a 00:04:31.081 SO libspdk_event_fsdev.so.1.0 00:04:31.081 LIB libspdk_event_scheduler.a 00:04:31.081 SO libspdk_event_sock.so.5.0 00:04:31.081 SO libspdk_event_keyring.so.1.0 00:04:31.081 SO libspdk_event_vmd.so.6.0 00:04:31.081 SO libspdk_event_vhost_blk.so.3.0 00:04:31.081 SO libspdk_event_iobuf.so.3.0 00:04:31.081 SO libspdk_event_scheduler.so.4.0 00:04:31.081 SYMLINK libspdk_event_fsdev.so 00:04:31.081 SYMLINK libspdk_event_sock.so 00:04:31.081 SYMLINK libspdk_event_keyring.so 00:04:31.081 SYMLINK libspdk_event_vmd.so 00:04:31.081 SYMLINK libspdk_event_vhost_blk.so 00:04:31.081 SYMLINK libspdk_event_iobuf.so 00:04:31.081 SYMLINK libspdk_event_scheduler.so 00:04:31.650 CC module/event/subsystems/accel/accel.o 00:04:31.650 LIB libspdk_event_accel.a 00:04:31.650 SO libspdk_event_accel.so.6.0 00:04:31.910 SYMLINK libspdk_event_accel.so 00:04:32.170 CC module/event/subsystems/bdev/bdev.o 00:04:32.430 LIB libspdk_event_bdev.a 00:04:32.430 SO libspdk_event_bdev.so.6.0 00:04:32.690 SYMLINK libspdk_event_bdev.so 00:04:32.949 CC module/event/subsystems/scsi/scsi.o 00:04:32.949 CC module/event/subsystems/nbd/nbd.o 00:04:32.949 CC module/event/subsystems/ublk/ublk.o 00:04:32.949 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:32.949 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:32.949 LIB libspdk_event_scsi.a 00:04:32.949 LIB libspdk_event_nbd.a 00:04:33.209 LIB libspdk_event_ublk.a 00:04:33.209 SO libspdk_event_scsi.so.6.0 00:04:33.209 SO libspdk_event_nbd.so.6.0 00:04:33.209 SO libspdk_event_ublk.so.3.0 00:04:33.209 SYMLINK libspdk_event_scsi.so 00:04:33.209 SYMLINK libspdk_event_nbd.so 00:04:33.209 LIB libspdk_event_nvmf.a 00:04:33.209 SYMLINK libspdk_event_ublk.so 00:04:33.209 SO libspdk_event_nvmf.so.6.0 00:04:33.209 SYMLINK libspdk_event_nvmf.so 00:04:33.468 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:33.468 CC module/event/subsystems/iscsi/iscsi.o 00:04:33.729 LIB libspdk_event_vhost_scsi.a 00:04:33.729 LIB libspdk_event_iscsi.a 00:04:33.729 SO libspdk_event_vhost_scsi.so.3.0 00:04:33.729 SO libspdk_event_iscsi.so.6.0 00:04:33.729 SYMLINK libspdk_event_vhost_scsi.so 00:04:33.729 SYMLINK libspdk_event_iscsi.so 00:04:33.990 SO libspdk.so.6.0 00:04:33.990 SYMLINK libspdk.so 00:04:34.249 CC test/rpc_client/rpc_client_test.o 00:04:34.249 TEST_HEADER include/spdk/accel.h 00:04:34.249 CXX app/trace/trace.o 00:04:34.249 TEST_HEADER include/spdk/accel_module.h 00:04:34.249 TEST_HEADER include/spdk/assert.h 00:04:34.249 TEST_HEADER include/spdk/barrier.h 00:04:34.249 TEST_HEADER 
include/spdk/base64.h 00:04:34.249 TEST_HEADER include/spdk/bdev.h 00:04:34.249 TEST_HEADER include/spdk/bdev_module.h 00:04:34.249 TEST_HEADER include/spdk/bdev_zone.h 00:04:34.249 TEST_HEADER include/spdk/bit_array.h 00:04:34.509 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:34.509 TEST_HEADER include/spdk/bit_pool.h 00:04:34.509 TEST_HEADER include/spdk/blob_bdev.h 00:04:34.509 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:34.509 TEST_HEADER include/spdk/blobfs.h 00:04:34.509 TEST_HEADER include/spdk/blob.h 00:04:34.509 TEST_HEADER include/spdk/conf.h 00:04:34.509 TEST_HEADER include/spdk/config.h 00:04:34.509 TEST_HEADER include/spdk/cpuset.h 00:04:34.509 TEST_HEADER include/spdk/crc16.h 00:04:34.509 TEST_HEADER include/spdk/crc32.h 00:04:34.509 TEST_HEADER include/spdk/crc64.h 00:04:34.509 TEST_HEADER include/spdk/dif.h 00:04:34.509 TEST_HEADER include/spdk/dma.h 00:04:34.509 TEST_HEADER include/spdk/endian.h 00:04:34.509 TEST_HEADER include/spdk/env_dpdk.h 00:04:34.509 TEST_HEADER include/spdk/env.h 00:04:34.509 TEST_HEADER include/spdk/event.h 00:04:34.509 TEST_HEADER include/spdk/fd_group.h 00:04:34.509 TEST_HEADER include/spdk/fd.h 00:04:34.509 TEST_HEADER include/spdk/file.h 00:04:34.509 TEST_HEADER include/spdk/fsdev.h 00:04:34.509 TEST_HEADER include/spdk/fsdev_module.h 00:04:34.509 TEST_HEADER include/spdk/ftl.h 00:04:34.509 CC test/thread/poller_perf/poller_perf.o 00:04:34.509 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:34.509 CC examples/util/zipf/zipf.o 00:04:34.509 TEST_HEADER include/spdk/gpt_spec.h 00:04:34.509 TEST_HEADER include/spdk/hexlify.h 00:04:34.509 TEST_HEADER include/spdk/histogram_data.h 00:04:34.509 TEST_HEADER include/spdk/idxd.h 00:04:34.509 CC examples/ioat/perf/perf.o 00:04:34.509 TEST_HEADER include/spdk/idxd_spec.h 00:04:34.509 TEST_HEADER include/spdk/init.h 00:04:34.509 TEST_HEADER include/spdk/ioat.h 00:04:34.509 TEST_HEADER include/spdk/ioat_spec.h 00:04:34.509 CC test/app/bdev_svc/bdev_svc.o 00:04:34.509 TEST_HEADER include/spdk/iscsi_spec.h 00:04:34.509 CC test/dma/test_dma/test_dma.o 00:04:34.509 TEST_HEADER include/spdk/json.h 00:04:34.509 TEST_HEADER include/spdk/jsonrpc.h 00:04:34.509 TEST_HEADER include/spdk/keyring.h 00:04:34.509 TEST_HEADER include/spdk/keyring_module.h 00:04:34.509 TEST_HEADER include/spdk/likely.h 00:04:34.509 TEST_HEADER include/spdk/log.h 00:04:34.509 TEST_HEADER include/spdk/lvol.h 00:04:34.509 TEST_HEADER include/spdk/md5.h 00:04:34.509 TEST_HEADER include/spdk/memory.h 00:04:34.509 TEST_HEADER include/spdk/mmio.h 00:04:34.509 TEST_HEADER include/spdk/nbd.h 00:04:34.509 TEST_HEADER include/spdk/net.h 00:04:34.509 TEST_HEADER include/spdk/notify.h 00:04:34.509 TEST_HEADER include/spdk/nvme.h 00:04:34.509 TEST_HEADER include/spdk/nvme_intel.h 00:04:34.509 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:34.509 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:34.509 TEST_HEADER include/spdk/nvme_spec.h 00:04:34.509 TEST_HEADER include/spdk/nvme_zns.h 00:04:34.509 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:34.509 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:34.509 CC test/env/mem_callbacks/mem_callbacks.o 00:04:34.509 TEST_HEADER include/spdk/nvmf.h 00:04:34.509 TEST_HEADER include/spdk/nvmf_spec.h 00:04:34.509 TEST_HEADER include/spdk/nvmf_transport.h 00:04:34.509 TEST_HEADER include/spdk/opal.h 00:04:34.509 TEST_HEADER include/spdk/opal_spec.h 00:04:34.509 TEST_HEADER include/spdk/pci_ids.h 00:04:34.509 TEST_HEADER include/spdk/pipe.h 00:04:34.509 TEST_HEADER include/spdk/queue.h 00:04:34.509 TEST_HEADER 
include/spdk/reduce.h 00:04:34.509 TEST_HEADER include/spdk/rpc.h 00:04:34.509 TEST_HEADER include/spdk/scheduler.h 00:04:34.509 TEST_HEADER include/spdk/scsi.h 00:04:34.509 TEST_HEADER include/spdk/scsi_spec.h 00:04:34.509 TEST_HEADER include/spdk/sock.h 00:04:34.509 TEST_HEADER include/spdk/stdinc.h 00:04:34.509 TEST_HEADER include/spdk/string.h 00:04:34.509 LINK rpc_client_test 00:04:34.509 TEST_HEADER include/spdk/thread.h 00:04:34.509 TEST_HEADER include/spdk/trace.h 00:04:34.509 TEST_HEADER include/spdk/trace_parser.h 00:04:34.509 TEST_HEADER include/spdk/tree.h 00:04:34.509 TEST_HEADER include/spdk/ublk.h 00:04:34.509 TEST_HEADER include/spdk/util.h 00:04:34.509 TEST_HEADER include/spdk/uuid.h 00:04:34.509 TEST_HEADER include/spdk/version.h 00:04:34.509 LINK interrupt_tgt 00:04:34.509 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:34.509 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:34.509 TEST_HEADER include/spdk/vhost.h 00:04:34.509 TEST_HEADER include/spdk/vmd.h 00:04:34.509 TEST_HEADER include/spdk/xor.h 00:04:34.509 TEST_HEADER include/spdk/zipf.h 00:04:34.509 CXX test/cpp_headers/accel.o 00:04:34.509 LINK zipf 00:04:34.509 LINK poller_perf 00:04:34.768 LINK bdev_svc 00:04:34.768 LINK ioat_perf 00:04:34.768 CXX test/cpp_headers/accel_module.o 00:04:34.768 LINK spdk_trace 00:04:34.768 CC test/env/vtophys/vtophys.o 00:04:34.768 CC test/env/memory/memory_ut.o 00:04:34.768 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:34.768 CC test/env/pci/pci_ut.o 00:04:34.768 CC examples/ioat/verify/verify.o 00:04:35.027 CXX test/cpp_headers/assert.o 00:04:35.027 LINK vtophys 00:04:35.027 LINK test_dma 00:04:35.027 LINK env_dpdk_post_init 00:04:35.027 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:35.027 CC app/trace_record/trace_record.o 00:04:35.027 LINK mem_callbacks 00:04:35.027 CXX test/cpp_headers/barrier.o 00:04:35.027 LINK verify 00:04:35.027 CXX test/cpp_headers/base64.o 00:04:35.285 CXX test/cpp_headers/bdev.o 00:04:35.285 CC app/nvmf_tgt/nvmf_main.o 00:04:35.285 LINK pci_ut 00:04:35.285 LINK spdk_trace_record 00:04:35.285 CC app/iscsi_tgt/iscsi_tgt.o 00:04:35.285 CXX test/cpp_headers/bdev_module.o 00:04:35.285 LINK nvmf_tgt 00:04:35.285 CC examples/sock/hello_world/hello_sock.o 00:04:35.544 LINK nvme_fuzz 00:04:35.544 CC examples/thread/thread/thread_ex.o 00:04:35.544 CC examples/vmd/lsvmd/lsvmd.o 00:04:35.544 CXX test/cpp_headers/bdev_zone.o 00:04:35.544 LINK iscsi_tgt 00:04:35.544 CXX test/cpp_headers/bit_array.o 00:04:35.544 LINK lsvmd 00:04:35.544 CC examples/idxd/perf/perf.o 00:04:35.802 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:35.802 LINK thread 00:04:35.802 LINK hello_sock 00:04:35.802 CC test/event/event_perf/event_perf.o 00:04:35.802 CC test/event/reactor/reactor.o 00:04:35.802 CXX test/cpp_headers/bit_pool.o 00:04:35.802 CXX test/cpp_headers/blob_bdev.o 00:04:35.802 CC examples/vmd/led/led.o 00:04:35.802 CC app/spdk_tgt/spdk_tgt.o 00:04:35.802 LINK reactor 00:04:35.802 LINK event_perf 00:04:36.062 LINK idxd_perf 00:04:36.062 LINK memory_ut 00:04:36.062 LINK led 00:04:36.062 CXX test/cpp_headers/blobfs_bdev.o 00:04:36.062 CC test/nvme/aer/aer.o 00:04:36.062 LINK spdk_tgt 00:04:36.062 CC test/event/reactor_perf/reactor_perf.o 00:04:36.062 CC test/accel/dif/dif.o 00:04:36.321 CXX test/cpp_headers/blobfs.o 00:04:36.321 CC test/nvme/reset/reset.o 00:04:36.321 CC test/blobfs/mkfs/mkfs.o 00:04:36.321 LINK reactor_perf 00:04:36.321 CC examples/nvme/hello_world/hello_world.o 00:04:36.321 CC app/spdk_lspci/spdk_lspci.o 00:04:36.322 CC test/lvol/esnap/esnap.o 
00:04:36.322 CXX test/cpp_headers/blob.o 00:04:36.322 LINK aer 00:04:36.581 LINK mkfs 00:04:36.581 LINK reset 00:04:36.581 LINK spdk_lspci 00:04:36.581 CC test/event/app_repeat/app_repeat.o 00:04:36.581 CXX test/cpp_headers/conf.o 00:04:36.581 LINK hello_world 00:04:36.581 CC examples/nvme/reconnect/reconnect.o 00:04:36.581 LINK app_repeat 00:04:36.581 CXX test/cpp_headers/config.o 00:04:36.840 CXX test/cpp_headers/cpuset.o 00:04:36.840 CC app/spdk_nvme_perf/perf.o 00:04:36.840 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:36.840 CC test/nvme/sgl/sgl.o 00:04:36.840 CC test/nvme/e2edp/nvme_dp.o 00:04:36.840 CXX test/cpp_headers/crc16.o 00:04:36.840 LINK dif 00:04:37.099 CC test/event/scheduler/scheduler.o 00:04:37.099 LINK reconnect 00:04:37.099 CXX test/cpp_headers/crc32.o 00:04:37.099 LINK sgl 00:04:37.099 LINK nvme_dp 00:04:37.099 CC test/nvme/overhead/overhead.o 00:04:37.099 LINK scheduler 00:04:37.099 CXX test/cpp_headers/crc64.o 00:04:37.358 CC test/nvme/err_injection/err_injection.o 00:04:37.358 LINK nvme_manage 00:04:37.358 CXX test/cpp_headers/dif.o 00:04:37.358 CXX test/cpp_headers/dma.o 00:04:37.358 LINK err_injection 00:04:37.358 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:37.358 LINK overhead 00:04:37.616 CC examples/accel/perf/accel_perf.o 00:04:37.616 LINK iscsi_fuzz 00:04:37.616 CXX test/cpp_headers/endian.o 00:04:37.616 CC examples/nvme/arbitration/arbitration.o 00:04:37.616 LINK spdk_nvme_perf 00:04:37.616 CC app/spdk_nvme_identify/identify.o 00:04:37.616 CC test/nvme/startup/startup.o 00:04:37.616 LINK hello_fsdev 00:04:37.875 CXX test/cpp_headers/env_dpdk.o 00:04:37.875 CC examples/blob/hello_world/hello_blob.o 00:04:37.875 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:37.875 CXX test/cpp_headers/env.o 00:04:37.875 LINK startup 00:04:37.875 CC examples/blob/cli/blobcli.o 00:04:37.875 LINK arbitration 00:04:37.875 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:37.875 LINK hello_blob 00:04:37.875 LINK accel_perf 00:04:38.133 CC test/nvme/reserve/reserve.o 00:04:38.133 CXX test/cpp_headers/event.o 00:04:38.133 CXX test/cpp_headers/fd_group.o 00:04:38.133 CXX test/cpp_headers/fd.o 00:04:38.133 LINK reserve 00:04:38.133 CC test/nvme/simple_copy/simple_copy.o 00:04:38.133 CC examples/nvme/hotplug/hotplug.o 00:04:38.133 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:38.393 CXX test/cpp_headers/file.o 00:04:38.393 LINK vhost_fuzz 00:04:38.393 CC test/bdev/bdevio/bdevio.o 00:04:38.393 LINK cmb_copy 00:04:38.393 LINK blobcli 00:04:38.393 CC test/nvme/connect_stress/connect_stress.o 00:04:38.393 LINK hotplug 00:04:38.393 LINK simple_copy 00:04:38.698 CXX test/cpp_headers/fsdev.o 00:04:38.698 CXX test/cpp_headers/fsdev_module.o 00:04:38.698 LINK spdk_nvme_identify 00:04:38.698 CC test/app/histogram_perf/histogram_perf.o 00:04:38.698 LINK connect_stress 00:04:38.698 CXX test/cpp_headers/ftl.o 00:04:38.698 CC examples/nvme/abort/abort.o 00:04:38.698 LINK histogram_perf 00:04:38.698 CXX test/cpp_headers/fuse_dispatcher.o 00:04:38.698 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:39.040 CC app/spdk_nvme_discover/discovery_aer.o 00:04:39.040 LINK bdevio 00:04:39.040 CC app/spdk_top/spdk_top.o 00:04:39.040 CXX test/cpp_headers/gpt_spec.o 00:04:39.040 CC test/nvme/boot_partition/boot_partition.o 00:04:39.040 LINK pmr_persistence 00:04:39.040 CC test/nvme/compliance/nvme_compliance.o 00:04:39.040 CC test/app/jsoncat/jsoncat.o 00:04:39.040 LINK spdk_nvme_discover 00:04:39.040 CXX test/cpp_headers/hexlify.o 00:04:39.040 LINK boot_partition 00:04:39.040 LINK jsoncat 
00:04:39.299 LINK abort 00:04:39.299 CC app/vhost/vhost.o 00:04:39.299 CC test/nvme/fused_ordering/fused_ordering.o 00:04:39.299 CXX test/cpp_headers/histogram_data.o 00:04:39.299 CXX test/cpp_headers/idxd.o 00:04:39.299 CC examples/bdev/hello_world/hello_bdev.o 00:04:39.299 LINK nvme_compliance 00:04:39.299 CC test/app/stub/stub.o 00:04:39.299 LINK vhost 00:04:39.299 CC examples/bdev/bdevperf/bdevperf.o 00:04:39.558 LINK fused_ordering 00:04:39.558 CXX test/cpp_headers/idxd_spec.o 00:04:39.558 CXX test/cpp_headers/init.o 00:04:39.558 LINK stub 00:04:39.558 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:39.558 CXX test/cpp_headers/ioat.o 00:04:39.558 LINK hello_bdev 00:04:39.558 CXX test/cpp_headers/ioat_spec.o 00:04:39.558 CXX test/cpp_headers/iscsi_spec.o 00:04:39.818 LINK doorbell_aers 00:04:39.818 CXX test/cpp_headers/json.o 00:04:39.818 CC test/nvme/fdp/fdp.o 00:04:39.818 CC test/nvme/cuse/cuse.o 00:04:39.818 CXX test/cpp_headers/jsonrpc.o 00:04:39.818 LINK spdk_top 00:04:39.818 CC app/spdk_dd/spdk_dd.o 00:04:39.818 CXX test/cpp_headers/keyring.o 00:04:39.818 CXX test/cpp_headers/keyring_module.o 00:04:39.818 CXX test/cpp_headers/likely.o 00:04:39.818 CXX test/cpp_headers/log.o 00:04:40.077 CC app/fio/nvme/fio_plugin.o 00:04:40.077 CXX test/cpp_headers/lvol.o 00:04:40.077 LINK fdp 00:04:40.077 CXX test/cpp_headers/md5.o 00:04:40.077 CXX test/cpp_headers/memory.o 00:04:40.077 CC app/fio/bdev/fio_plugin.o 00:04:40.077 CXX test/cpp_headers/mmio.o 00:04:40.335 LINK spdk_dd 00:04:40.335 CXX test/cpp_headers/nbd.o 00:04:40.335 LINK bdevperf 00:04:40.335 CXX test/cpp_headers/net.o 00:04:40.335 CXX test/cpp_headers/notify.o 00:04:40.335 CXX test/cpp_headers/nvme.o 00:04:40.335 CXX test/cpp_headers/nvme_intel.o 00:04:40.335 CXX test/cpp_headers/nvme_ocssd.o 00:04:40.335 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:40.335 CXX test/cpp_headers/nvme_spec.o 00:04:40.594 CXX test/cpp_headers/nvme_zns.o 00:04:40.594 CXX test/cpp_headers/nvmf_cmd.o 00:04:40.594 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:40.594 CXX test/cpp_headers/nvmf.o 00:04:40.594 LINK spdk_nvme 00:04:40.594 CXX test/cpp_headers/nvmf_spec.o 00:04:40.594 CC examples/nvmf/nvmf/nvmf.o 00:04:40.594 LINK spdk_bdev 00:04:40.594 CXX test/cpp_headers/nvmf_transport.o 00:04:40.594 CXX test/cpp_headers/opal.o 00:04:40.594 CXX test/cpp_headers/opal_spec.o 00:04:40.853 CXX test/cpp_headers/pci_ids.o 00:04:40.853 CXX test/cpp_headers/pipe.o 00:04:40.853 CXX test/cpp_headers/queue.o 00:04:40.853 CXX test/cpp_headers/reduce.o 00:04:40.853 CXX test/cpp_headers/rpc.o 00:04:40.853 CXX test/cpp_headers/scheduler.o 00:04:40.853 CXX test/cpp_headers/scsi.o 00:04:40.853 CXX test/cpp_headers/scsi_spec.o 00:04:40.853 CXX test/cpp_headers/sock.o 00:04:40.853 CXX test/cpp_headers/stdinc.o 00:04:40.853 LINK nvmf 00:04:41.111 CXX test/cpp_headers/string.o 00:04:41.111 CXX test/cpp_headers/thread.o 00:04:41.111 CXX test/cpp_headers/trace.o 00:04:41.111 LINK cuse 00:04:41.111 CXX test/cpp_headers/trace_parser.o 00:04:41.111 CXX test/cpp_headers/tree.o 00:04:41.111 CXX test/cpp_headers/ublk.o 00:04:41.111 CXX test/cpp_headers/util.o 00:04:41.111 CXX test/cpp_headers/uuid.o 00:04:41.111 CXX test/cpp_headers/version.o 00:04:41.111 CXX test/cpp_headers/vfio_user_pci.o 00:04:41.111 CXX test/cpp_headers/vfio_user_spec.o 00:04:41.111 CXX test/cpp_headers/vhost.o 00:04:41.111 CXX test/cpp_headers/vmd.o 00:04:41.111 CXX test/cpp_headers/xor.o 00:04:41.369 CXX test/cpp_headers/zipf.o 00:04:42.305 LINK esnap 00:04:42.871 00:04:42.871 real 1m23.556s 00:04:42.871 user 
7m5.707s 00:04:42.871 sys 1m58.557s 00:04:42.871 03:49:25 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:42.871 ************************************ 00:04:42.871 END TEST make 00:04:42.871 ************************************ 00:04:42.871 03:49:25 make -- common/autotest_common.sh@10 -- $ set +x 00:04:42.871 03:49:25 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:42.871 03:49:25 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:42.871 03:49:25 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:42.871 03:49:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:42.871 03:49:25 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:42.871 03:49:25 -- pm/common@44 -- $ pid=5284 00:04:42.871 03:49:25 -- pm/common@50 -- $ kill -TERM 5284 00:04:42.871 03:49:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:42.871 03:49:25 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:42.871 03:49:25 -- pm/common@44 -- $ pid=5285 00:04:42.871 03:49:25 -- pm/common@50 -- $ kill -TERM 5285 00:04:42.871 03:49:25 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:42.871 03:49:25 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:42.871 03:49:25 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:42.871 03:49:25 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:42.871 03:49:25 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:43.130 03:49:25 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:43.130 03:49:25 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.130 03:49:25 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.130 03:49:25 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.130 03:49:25 -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.130 03:49:25 -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.130 03:49:25 -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.130 03:49:25 -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.130 03:49:25 -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.130 03:49:25 -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.130 03:49:25 -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.130 03:49:25 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.130 03:49:25 -- scripts/common.sh@344 -- # case "$op" in 00:04:43.130 03:49:25 -- scripts/common.sh@345 -- # : 1 00:04:43.130 03:49:25 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.130 03:49:25 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:43.130 03:49:25 -- scripts/common.sh@365 -- # decimal 1 00:04:43.130 03:49:25 -- scripts/common.sh@353 -- # local d=1 00:04:43.130 03:49:25 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.130 03:49:25 -- scripts/common.sh@355 -- # echo 1 00:04:43.130 03:49:25 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.130 03:49:25 -- scripts/common.sh@366 -- # decimal 2 00:04:43.130 03:49:25 -- scripts/common.sh@353 -- # local d=2 00:04:43.130 03:49:25 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.130 03:49:25 -- scripts/common.sh@355 -- # echo 2 00:04:43.130 03:49:25 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.130 03:49:25 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.130 03:49:25 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.130 03:49:25 -- scripts/common.sh@368 -- # return 0 00:04:43.130 03:49:25 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.130 03:49:25 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:43.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.130 --rc genhtml_branch_coverage=1 00:04:43.130 --rc genhtml_function_coverage=1 00:04:43.130 --rc genhtml_legend=1 00:04:43.130 --rc geninfo_all_blocks=1 00:04:43.130 --rc geninfo_unexecuted_blocks=1 00:04:43.130 00:04:43.130 ' 00:04:43.130 03:49:25 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:43.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.130 --rc genhtml_branch_coverage=1 00:04:43.130 --rc genhtml_function_coverage=1 00:04:43.130 --rc genhtml_legend=1 00:04:43.130 --rc geninfo_all_blocks=1 00:04:43.130 --rc geninfo_unexecuted_blocks=1 00:04:43.130 00:04:43.130 ' 00:04:43.130 03:49:25 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:43.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.130 --rc genhtml_branch_coverage=1 00:04:43.130 --rc genhtml_function_coverage=1 00:04:43.130 --rc genhtml_legend=1 00:04:43.130 --rc geninfo_all_blocks=1 00:04:43.130 --rc geninfo_unexecuted_blocks=1 00:04:43.130 00:04:43.130 ' 00:04:43.130 03:49:25 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:43.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.130 --rc genhtml_branch_coverage=1 00:04:43.130 --rc genhtml_function_coverage=1 00:04:43.130 --rc genhtml_legend=1 00:04:43.130 --rc geninfo_all_blocks=1 00:04:43.130 --rc geninfo_unexecuted_blocks=1 00:04:43.130 00:04:43.130 ' 00:04:43.130 03:49:25 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:43.130 03:49:25 -- nvmf/common.sh@7 -- # uname -s 00:04:43.130 03:49:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:43.130 03:49:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:43.130 03:49:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:43.130 03:49:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:43.130 03:49:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:43.130 03:49:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:43.130 03:49:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:43.130 03:49:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:43.130 03:49:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:43.130 03:49:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:43.130 03:49:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:099867bc-932e-4148-8a2f-ef14cc589e12 00:04:43.130 
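The cmp_versions walk traced above gates the coverage flags on whether lcov is older than 2.x. A minimal standalone sketch of that component-wise comparison (a reconstruction for illustration, not the verbatim scripts/common.sh code):

version_lt() {
    # Split both versions on '.', '-', ':' and compare component by component;
    # missing components count as 0, so "1.15" vs "2" is decided by 1 < 2.
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < max; i++ )); do
        (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
        (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov < 2: keep the legacy --rc lcov_* option names"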
03:49:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=099867bc-932e-4148-8a2f-ef14cc589e12 00:04:43.130 03:49:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:43.130 03:49:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:43.130 03:49:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:43.131 03:49:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:43.131 03:49:25 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:43.131 03:49:25 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:43.131 03:49:25 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:43.131 03:49:25 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:43.131 03:49:25 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:43.131 03:49:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.131 03:49:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.131 03:49:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.131 03:49:25 -- paths/export.sh@5 -- # export PATH 00:04:43.131 03:49:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.131 03:49:25 -- nvmf/common.sh@51 -- # : 0 00:04:43.131 03:49:25 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:43.131 03:49:25 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:43.131 03:49:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:43.131 03:49:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:43.131 03:49:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:43.131 03:49:25 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:43.131 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:43.131 03:49:25 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:43.131 03:49:25 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:43.131 03:49:25 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:43.131 03:49:25 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:43.131 03:49:25 -- spdk/autotest.sh@32 -- # uname -s 00:04:43.131 03:49:25 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:43.131 03:49:25 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:43.131 03:49:25 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:43.131 03:49:25 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:43.131 03:49:25 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:43.131 03:49:25 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:43.131 03:49:25 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:43.131 03:49:25 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:43.131 03:49:25 -- spdk/autotest.sh@48 -- # udevadm_pid=54749 00:04:43.131 03:49:25 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:43.131 03:49:25 -- pm/common@17 -- # local monitor 00:04:43.131 03:49:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:43.131 03:49:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:43.131 03:49:25 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:43.131 03:49:25 -- pm/common@21 -- # date +%s 00:04:43.131 03:49:25 -- pm/common@21 -- # date +%s 00:04:43.131 03:49:25 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733543365 00:04:43.131 03:49:25 -- pm/common@25 -- # sleep 1 00:04:43.131 03:49:25 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733543365 00:04:43.131 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733543365_collect-cpu-load.pm.log 00:04:43.131 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733543365_collect-vmstat.pm.log 00:04:44.509 03:49:26 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:44.509 03:49:26 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:44.509 03:49:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:44.509 03:49:26 -- common/autotest_common.sh@10 -- # set +x 00:04:44.509 03:49:26 -- spdk/autotest.sh@59 -- # create_test_list 00:04:44.509 03:49:26 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:44.509 03:49:26 -- common/autotest_common.sh@10 -- # set +x 00:04:44.509 03:49:26 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:44.509 03:49:26 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:44.509 03:49:26 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:44.509 03:49:26 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:44.509 03:49:26 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:44.509 03:49:26 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:44.509 03:49:26 -- common/autotest_common.sh@1457 -- # uname 00:04:44.509 03:49:26 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:44.509 03:49:26 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:44.509 03:49:26 -- common/autotest_common.sh@1477 -- # uname 00:04:44.509 03:49:26 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:44.509 03:49:26 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:44.509 03:49:26 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:44.509 lcov: LCOV version 1.15 00:04:44.509 03:49:27 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:59.401 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:59.401 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:17.505 03:49:57 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:17.505 03:49:57 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.505 03:49:57 -- common/autotest_common.sh@10 -- # set +x 00:05:17.505 03:49:57 -- spdk/autotest.sh@78 -- # rm -f 00:05:17.505 03:49:57 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:17.505 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:17.505 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:17.505 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:17.505 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:17.505 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:17.505 03:49:58 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:17.505 03:49:58 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:17.505 03:49:58 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:17.505 03:49:58 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:05:17.505 03:49:58 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:05:17.505 03:49:58 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:05:17.505 03:49:58 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:17.505 03:49:58 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:05:17.505 03:49:58 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:17.505 03:49:58 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:05:17.505 03:49:58 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:17.505 03:49:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:17.505 03:49:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:17.505 03:49:58 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:17.505 03:49:58 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:05:17.505 03:49:58 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:17.505 03:49:58 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:05:17.505 03:49:58 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:17.505 03:49:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:17.505 03:49:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:17.505 03:49:58 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:17.505 03:49:58 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:05:17.505 03:49:58 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:17.505 03:49:58 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:05:17.505 03:49:58 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:05:17.505 03:49:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:17.505 03:49:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:17.505 03:49:58 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:17.505 03:49:58 -- common/autotest_common.sh@1671 
-- # is_block_zoned nvme2n2 00:05:17.505 03:49:58 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:05:17.505 03:49:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:17.505 03:49:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:17.505 03:49:58 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:17.505 03:49:58 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:05:17.505 03:49:58 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:05:17.505 03:49:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:17.505 03:49:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:17.505 03:49:58 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:05:17.505 03:49:58 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:05:17.505 03:49:58 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:05:17.505 03:49:58 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:05:17.505 03:49:58 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:05:17.505 03:49:58 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:17.505 03:49:58 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:17.505 03:49:58 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:17.505 03:49:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:17.505 03:49:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:17.505 03:49:58 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:17.505 03:49:58 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:17.505 03:49:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:17.505 No valid GPT data, bailing 00:05:17.505 03:49:58 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:17.505 03:49:58 -- scripts/common.sh@394 -- # pt= 00:05:17.505 03:49:58 -- scripts/common.sh@395 -- # return 1 00:05:17.505 03:49:58 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:17.505 1+0 records in 00:05:17.506 1+0 records out 00:05:17.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210205 s, 49.9 MB/s 00:05:17.506 03:49:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:17.506 03:49:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:17.506 03:49:58 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:17.506 03:49:58 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:17.506 03:49:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:17.506 No valid GPT data, bailing 00:05:17.506 03:49:58 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:17.506 03:49:58 -- scripts/common.sh@394 -- # pt= 00:05:17.506 03:49:58 -- scripts/common.sh@395 -- # return 1 00:05:17.506 03:49:58 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:17.506 1+0 records in 00:05:17.506 1+0 records out 00:05:17.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0063056 s, 166 MB/s 00:05:17.506 03:49:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:17.506 03:49:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:17.506 03:49:58 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:05:17.506 03:49:58 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:05:17.506 03:49:58 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:17.506 No valid GPT data, bailing 00:05:17.506 03:49:58 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:17.506 03:49:58 -- scripts/common.sh@394 -- # pt= 00:05:17.506 03:49:58 -- scripts/common.sh@395 -- # return 1 00:05:17.506 03:49:58 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:17.506 1+0 records in 00:05:17.506 1+0 records out 00:05:17.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00614444 s, 171 MB/s 00:05:17.506 03:49:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:17.506 03:49:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:17.506 03:49:58 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:05:17.506 03:49:58 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:05:17.506 03:49:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:05:17.506 No valid GPT data, bailing 00:05:17.506 03:49:58 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:17.506 03:49:58 -- scripts/common.sh@394 -- # pt= 00:05:17.506 03:49:58 -- scripts/common.sh@395 -- # return 1 00:05:17.506 03:49:58 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:05:17.506 1+0 records in 00:05:17.506 1+0 records out 00:05:17.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0062871 s, 167 MB/s 00:05:17.506 03:49:58 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:17.506 03:49:58 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:17.506 03:49:58 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:05:17.506 03:49:58 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:05:17.506 03:49:58 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:05:17.506 No valid GPT data, bailing 00:05:17.506 03:49:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:17.506 03:49:59 -- scripts/common.sh@394 -- # pt= 00:05:17.506 03:49:59 -- scripts/common.sh@395 -- # return 1 00:05:17.506 03:49:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:05:17.506 1+0 records in 00:05:17.506 1+0 records out 00:05:17.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00669087 s, 157 MB/s 00:05:17.506 03:49:59 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:17.506 03:49:59 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:17.506 03:49:59 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:05:17.506 03:49:59 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:05:17.506 03:49:59 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:17.506 No valid GPT data, bailing 00:05:17.506 03:49:59 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:17.506 03:49:59 -- scripts/common.sh@394 -- # pt= 00:05:17.506 03:49:59 -- scripts/common.sh@395 -- # return 1 00:05:17.506 03:49:59 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:17.506 1+0 records in 00:05:17.506 1+0 records out 00:05:17.506 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0062687 s, 167 MB/s 00:05:17.506 03:49:59 -- spdk/autotest.sh@105 -- # sync 00:05:17.506 03:49:59 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:17.506 03:49:59 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:17.506 03:49:59 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:20.044 03:50:02 
-- spdk/autotest.sh@111 -- # uname -s 00:05:20.044 03:50:02 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:20.044 03:50:02 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:20.044 03:50:02 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:20.303 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:20.872 Hugepages 00:05:20.872 node hugesize free / total 00:05:20.872 node0 1048576kB 0 / 0 00:05:20.872 node0 2048kB 0 / 0 00:05:20.872 00:05:20.872 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:21.130 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:21.130 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:21.388 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:21.388 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:21.646 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:21.646 03:50:04 -- spdk/autotest.sh@117 -- # uname -s 00:05:21.646 03:50:04 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:21.646 03:50:04 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:21.646 03:50:04 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:22.212 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:23.148 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.148 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.148 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.148 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:23.148 03:50:05 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:24.529 03:50:06 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:24.529 03:50:06 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:24.529 03:50:06 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:24.529 03:50:06 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:24.529 03:50:06 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:24.529 03:50:06 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:24.529 03:50:06 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:24.529 03:50:06 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:24.529 03:50:06 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:24.529 03:50:06 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:24.529 03:50:06 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:24.529 03:50:06 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:25.098 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:25.098 Waiting for block devices as requested 00:05:25.358 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:25.358 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:25.618 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:25.618 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:05:30.963 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:05:30.963 03:50:13 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:30.963 03:50:13 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:30.963 
03:50:13 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:30.963 03:50:13 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:30.963 03:50:13 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:30.963 03:50:13 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:30.963 03:50:13 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:30.963 03:50:13 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:30.963 03:50:13 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:30.963 03:50:13 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:30.963 03:50:13 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:30.963 03:50:13 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:30.963 03:50:13 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:30.963 03:50:13 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:30.963 03:50:13 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:30.963 03:50:13 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:30.963 03:50:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:30.963 03:50:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:30.963 03:50:13 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:30.963 03:50:13 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:30.963 03:50:13 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:30.963 03:50:13 -- common/autotest_common.sh@1543 -- # continue 00:05:30.963 03:50:13 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:30.963 03:50:13 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:30.963 03:50:13 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:30.963 03:50:13 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:30.963 03:50:13 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:30.963 03:50:13 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:30.963 03:50:13 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:30.963 03:50:13 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:30.963 03:50:13 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:30.963 03:50:13 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:30.963 03:50:13 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:30.963 03:50:13 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:30.963 03:50:13 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:30.963 03:50:13 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:30.963 03:50:13 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:30.963 03:50:13 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:30.963 03:50:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:30.963 03:50:13 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:30.963 03:50:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:30.963 03:50:13 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 
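The loop above resolves each PCI address to its /dev/nvmeX node through sysfs and then inspects the OACS field. A condensed sketch of the same lookup (illustrative only; the device names and the 0x12a value are taken from this run):

bdf=0000:00:10.0
# Find which nvme controller sysfs entry sits behind this PCI address.
ctrl_path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
ctrl=/dev/$(basename "$ctrl_path")
# OACS bit 3 (0x8) advertises namespace management; 0x12a has it set,
# which is why the trace derives oacs_ns_manage=8 before checking unvmcap.
oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)
(( oacs & 0x8 )) && echo "$ctrl supports namespace management"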
00:05:30.963 03:50:13 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:30.963 03:50:13 -- common/autotest_common.sh@1543 -- # continue 00:05:30.963 03:50:13 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:30.963 03:50:13 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:05:30.963 03:50:13 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:30.963 03:50:13 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:05:30.963 03:50:13 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:30.963 03:50:13 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:05:30.963 03:50:13 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:05:30.963 03:50:13 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:05:30.963 03:50:13 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:05:30.963 03:50:13 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:05:30.963 03:50:13 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:05:30.963 03:50:13 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:30.963 03:50:13 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:30.963 03:50:13 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:30.963 03:50:13 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:30.963 03:50:13 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:30.963 03:50:13 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:30.963 03:50:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:05:30.963 03:50:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:30.963 03:50:13 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:30.963 03:50:13 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:30.963 03:50:13 -- common/autotest_common.sh@1543 -- # continue 00:05:30.963 03:50:13 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:30.963 03:50:13 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:05:30.963 03:50:13 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:05:30.963 03:50:13 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:05:30.963 03:50:13 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:30.963 03:50:13 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:05:30.963 03:50:13 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:05:30.963 03:50:13 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:05:30.963 03:50:13 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:05:30.963 03:50:13 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:05:30.963 03:50:13 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:05:30.963 03:50:13 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:30.963 03:50:13 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:30.963 03:50:13 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:30.963 03:50:13 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:30.963 03:50:13 -- 
common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:30.963 03:50:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:05:30.963 03:50:13 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:30.963 03:50:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:30.963 03:50:13 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:30.963 03:50:13 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:30.963 03:50:13 -- common/autotest_common.sh@1543 -- # continue 00:05:30.963 03:50:13 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:30.963 03:50:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:30.963 03:50:13 -- common/autotest_common.sh@10 -- # set +x 00:05:30.963 03:50:13 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:30.963 03:50:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:30.963 03:50:13 -- common/autotest_common.sh@10 -- # set +x 00:05:30.964 03:50:13 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:31.903 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:32.474 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:32.474 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:32.474 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:32.733 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:32.733 03:50:15 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:32.733 03:50:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:32.733 03:50:15 -- common/autotest_common.sh@10 -- # set +x 00:05:32.733 03:50:15 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:32.733 03:50:15 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:32.733 03:50:15 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:32.733 03:50:15 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:32.733 03:50:15 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:32.733 03:50:15 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:32.733 03:50:15 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:32.733 03:50:15 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:32.733 03:50:15 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:32.733 03:50:15 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:32.733 03:50:15 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:32.733 03:50:15 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:32.733 03:50:15 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:32.992 03:50:15 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:05:32.992 03:50:15 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:32.992 03:50:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:32.992 03:50:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:32.992 03:50:15 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:32.992 03:50:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:32.992 03:50:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:32.992 03:50:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:32.992 03:50:15 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:32.992 
03:50:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:32.992 03:50:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:32.992 03:50:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:05:32.993 03:50:15 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:32.993 03:50:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:32.993 03:50:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:32.993 03:50:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:05:32.993 03:50:15 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:32.993 03:50:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:32.993 03:50:15 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:32.993 03:50:15 -- common/autotest_common.sh@1572 -- # return 0 00:05:32.993 03:50:15 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:32.993 03:50:15 -- common/autotest_common.sh@1580 -- # return 0 00:05:32.993 03:50:15 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:32.993 03:50:15 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:32.993 03:50:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:32.993 03:50:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:32.993 03:50:15 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:32.993 03:50:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:32.993 03:50:15 -- common/autotest_common.sh@10 -- # set +x 00:05:32.993 03:50:15 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:32.993 03:50:15 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:32.993 03:50:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.993 03:50:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.993 03:50:15 -- common/autotest_common.sh@10 -- # set +x 00:05:32.993 ************************************ 00:05:32.993 START TEST env 00:05:32.993 ************************************ 00:05:32.993 03:50:15 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:32.993 * Looking for test storage... 
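opal_revert_cleanup above only acts on controllers whose PCI device ID is 0x0a54; every QEMU-emulated controller in this VM reports 0x0010, so the loop falls through. The check reduces to this sketch (the BDF list is the one from this run):

for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
    # /sys/bus/pci/devices/$bdf/device holds the PCI device ID.
    [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] \
        && echo "$bdf matches 0x0a54, would run OPAL revert"
done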
00:05:32.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:32.993 03:50:15 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:32.993 03:50:15 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:32.993 03:50:15 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:33.258 03:50:15 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:33.258 03:50:15 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.258 03:50:15 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.258 03:50:15 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.258 03:50:15 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.258 03:50:15 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.258 03:50:15 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.258 03:50:15 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.258 03:50:15 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.258 03:50:15 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.258 03:50:15 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.258 03:50:15 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.258 03:50:15 env -- scripts/common.sh@344 -- # case "$op" in 00:05:33.258 03:50:15 env -- scripts/common.sh@345 -- # : 1 00:05:33.258 03:50:15 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.258 03:50:15 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:33.258 03:50:15 env -- scripts/common.sh@365 -- # decimal 1 00:05:33.258 03:50:15 env -- scripts/common.sh@353 -- # local d=1 00:05:33.258 03:50:15 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.258 03:50:15 env -- scripts/common.sh@355 -- # echo 1 00:05:33.258 03:50:15 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.258 03:50:15 env -- scripts/common.sh@366 -- # decimal 2 00:05:33.258 03:50:15 env -- scripts/common.sh@353 -- # local d=2 00:05:33.258 03:50:15 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.258 03:50:15 env -- scripts/common.sh@355 -- # echo 2 00:05:33.258 03:50:15 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.258 03:50:15 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.258 03:50:15 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.258 03:50:15 env -- scripts/common.sh@368 -- # return 0 00:05:33.258 03:50:15 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.258 03:50:15 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:33.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.258 --rc genhtml_branch_coverage=1 00:05:33.258 --rc genhtml_function_coverage=1 00:05:33.258 --rc genhtml_legend=1 00:05:33.258 --rc geninfo_all_blocks=1 00:05:33.258 --rc geninfo_unexecuted_blocks=1 00:05:33.258 00:05:33.258 ' 00:05:33.258 03:50:15 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:33.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.258 --rc genhtml_branch_coverage=1 00:05:33.258 --rc genhtml_function_coverage=1 00:05:33.258 --rc genhtml_legend=1 00:05:33.258 --rc geninfo_all_blocks=1 00:05:33.258 --rc geninfo_unexecuted_blocks=1 00:05:33.258 00:05:33.258 ' 00:05:33.258 03:50:15 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:33.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.258 --rc genhtml_branch_coverage=1 00:05:33.258 --rc genhtml_function_coverage=1 00:05:33.258 --rc 
genhtml_legend=1 00:05:33.258 --rc geninfo_all_blocks=1 00:05:33.258 --rc geninfo_unexecuted_blocks=1 00:05:33.258 00:05:33.258 ' 00:05:33.258 03:50:15 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:33.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.258 --rc genhtml_branch_coverage=1 00:05:33.258 --rc genhtml_function_coverage=1 00:05:33.258 --rc genhtml_legend=1 00:05:33.258 --rc geninfo_all_blocks=1 00:05:33.258 --rc geninfo_unexecuted_blocks=1 00:05:33.258 00:05:33.258 ' 00:05:33.258 03:50:15 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:33.258 03:50:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.258 03:50:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.259 03:50:15 env -- common/autotest_common.sh@10 -- # set +x 00:05:33.259 ************************************ 00:05:33.259 START TEST env_memory 00:05:33.259 ************************************ 00:05:33.259 03:50:15 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:33.259 00:05:33.259 00:05:33.259 CUnit - A unit testing framework for C - Version 2.1-3 00:05:33.259 http://cunit.sourceforge.net/ 00:05:33.259 00:05:33.259 00:05:33.259 Suite: memory 00:05:33.259 Test: alloc and free memory map ...[2024-12-07 03:50:15.895629] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:33.259 passed 00:05:33.259 Test: mem map translation ...[2024-12-07 03:50:15.940355] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:33.259 [2024-12-07 03:50:15.940400] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:33.259 [2024-12-07 03:50:15.940465] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:33.259 [2024-12-07 03:50:15.940504] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:33.518 passed 00:05:33.518 Test: mem map registration ...[2024-12-07 03:50:16.008487] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:33.518 [2024-12-07 03:50:16.008533] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:33.518 passed 00:05:33.518 Test: mem map adjacent registrations ...passed 00:05:33.518 00:05:33.518 Run Summary: Type Total Ran Passed Failed Inactive 00:05:33.518 suites 1 1 n/a 0 0 00:05:33.518 tests 4 4 4 0 0 00:05:33.518 asserts 152 152 152 0 n/a 00:05:33.518 00:05:33.518 Elapsed time = 0.243 seconds 00:05:33.518 00:05:33.518 real 0m0.296s 00:05:33.518 user 0m0.257s 00:05:33.518 sys 0m0.029s 00:05:33.518 03:50:16 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.519 03:50:16 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:33.519 ************************************ 00:05:33.519 END TEST env_memory 00:05:33.519 ************************************ 00:05:33.519 03:50:16 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:33.519 03:50:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.519 03:50:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.519 03:50:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:33.519 ************************************ 00:05:33.519 START TEST env_vtophys 00:05:33.519 ************************************ 00:05:33.519 03:50:16 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:33.519 EAL: lib.eal log level changed from notice to debug 00:05:33.519 EAL: Detected lcore 0 as core 0 on socket 0 00:05:33.519 EAL: Detected lcore 1 as core 0 on socket 0 00:05:33.519 EAL: Detected lcore 2 as core 0 on socket 0 00:05:33.519 EAL: Detected lcore 3 as core 0 on socket 0 00:05:33.519 EAL: Detected lcore 4 as core 0 on socket 0 00:05:33.519 EAL: Detected lcore 5 as core 0 on socket 0 00:05:33.519 EAL: Detected lcore 6 as core 0 on socket 0 00:05:33.519 EAL: Detected lcore 7 as core 0 on socket 0 00:05:33.519 EAL: Detected lcore 8 as core 0 on socket 0 00:05:33.519 EAL: Detected lcore 9 as core 0 on socket 0 00:05:33.778 EAL: Maximum logical cores by configuration: 128 00:05:33.778 EAL: Detected CPU lcores: 10 00:05:33.778 EAL: Detected NUMA nodes: 1 00:05:33.778 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:33.778 EAL: Detected shared linkage of DPDK 00:05:33.778 EAL: No shared files mode enabled, IPC will be disabled 00:05:33.778 EAL: Selected IOVA mode 'PA' 00:05:33.778 EAL: Probing VFIO support... 00:05:33.778 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:33.778 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:33.778 EAL: Ask a virtual area of 0x2e000 bytes 00:05:33.778 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:33.778 EAL: Setting up physically contiguous memory... 
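Note on the VFIO probe above: the vfio kernel module is absent in this VM, so EAL skips VFIO and proceeds in IOVA mode 'PA' with uio-style device access. A minimal pre-flight check for a host before a run like this -- plain shell, nothing SPDK-specific assumed:

    # Absence of these modules produces the "Module /sys/module/vfio not found"
    # lines seen above.
    lsmod | grep '^vfio' || echo 'vfio not loaded'
    # EAL carves its memseg lists out of hugepages; confirm some are configured.
    grep -i huge /proc/meminfo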
00:05:33.778 EAL: Setting maximum number of open files to 524288 00:05:33.778 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:33.778 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:33.778 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.778 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:33.778 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:33.778 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.778 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:33.778 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:33.778 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.778 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:33.778 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:33.778 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.778 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:33.778 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:33.778 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.778 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:33.778 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:33.778 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.778 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:33.778 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:33.778 EAL: Ask a virtual area of 0x61000 bytes 00:05:33.778 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:33.778 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:33.778 EAL: Ask a virtual area of 0x400000000 bytes 00:05:33.778 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:33.778 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:33.778 EAL: Hugepages will be freed exactly as allocated. 00:05:33.778 EAL: No shared files mode enabled, IPC is disabled 00:05:33.778 EAL: No shared files mode enabled, IPC is disabled 00:05:33.778 EAL: TSC frequency is ~2490000 KHz 00:05:33.778 EAL: Main lcore 0 is ready (tid=7fe61e242a40;cpuset=[0]) 00:05:33.778 EAL: Trying to obtain current memory policy. 00:05:33.778 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:33.778 EAL: Restoring previous memory policy: 0 00:05:33.778 EAL: request: mp_malloc_sync 00:05:33.778 EAL: No shared files mode enabled, IPC is disabled 00:05:33.778 EAL: Heap on socket 0 was expanded by 2MB 00:05:33.778 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:33.778 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:33.778 EAL: Mem event callback 'spdk:(nil)' registered 00:05:33.778 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:33.778 00:05:33.778 00:05:33.778 CUnit - A unit testing framework for C - Version 2.1-3 00:05:33.778 http://cunit.sourceforge.net/ 00:05:33.778 00:05:33.778 00:05:33.778 Suite: components_suite 00:05:34.347 Test: vtophys_malloc_test ...passed 00:05:34.347 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:34.347 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.347 EAL: Restoring previous memory policy: 4 00:05:34.347 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.347 EAL: request: mp_malloc_sync 00:05:34.347 EAL: No shared files mode enabled, IPC is disabled 00:05:34.347 EAL: Heap on socket 0 was expanded by 4MB 00:05:34.347 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.347 EAL: request: mp_malloc_sync 00:05:34.347 EAL: No shared files mode enabled, IPC is disabled 00:05:34.347 EAL: Heap on socket 0 was shrunk by 4MB 00:05:34.347 EAL: Trying to obtain current memory policy. 00:05:34.347 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.347 EAL: Restoring previous memory policy: 4 00:05:34.347 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.347 EAL: request: mp_malloc_sync 00:05:34.347 EAL: No shared files mode enabled, IPC is disabled 00:05:34.347 EAL: Heap on socket 0 was expanded by 6MB 00:05:34.347 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.347 EAL: request: mp_malloc_sync 00:05:34.347 EAL: No shared files mode enabled, IPC is disabled 00:05:34.347 EAL: Heap on socket 0 was shrunk by 6MB 00:05:34.347 EAL: Trying to obtain current memory policy. 00:05:34.347 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.347 EAL: Restoring previous memory policy: 4 00:05:34.347 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.347 EAL: request: mp_malloc_sync 00:05:34.348 EAL: No shared files mode enabled, IPC is disabled 00:05:34.348 EAL: Heap on socket 0 was expanded by 10MB 00:05:34.348 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.348 EAL: request: mp_malloc_sync 00:05:34.348 EAL: No shared files mode enabled, IPC is disabled 00:05:34.348 EAL: Heap on socket 0 was shrunk by 10MB 00:05:34.348 EAL: Trying to obtain current memory policy. 00:05:34.348 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.348 EAL: Restoring previous memory policy: 4 00:05:34.348 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.348 EAL: request: mp_malloc_sync 00:05:34.348 EAL: No shared files mode enabled, IPC is disabled 00:05:34.348 EAL: Heap on socket 0 was expanded by 18MB 00:05:34.348 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.348 EAL: request: mp_malloc_sync 00:05:34.348 EAL: No shared files mode enabled, IPC is disabled 00:05:34.348 EAL: Heap on socket 0 was shrunk by 18MB 00:05:34.348 EAL: Trying to obtain current memory policy. 00:05:34.348 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.348 EAL: Restoring previous memory policy: 4 00:05:34.348 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.348 EAL: request: mp_malloc_sync 00:05:34.348 EAL: No shared files mode enabled, IPC is disabled 00:05:34.348 EAL: Heap on socket 0 was expanded by 34MB 00:05:34.348 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.348 EAL: request: mp_malloc_sync 00:05:34.348 EAL: No shared files mode enabled, IPC is disabled 00:05:34.348 EAL: Heap on socket 0 was shrunk by 34MB 00:05:34.607 EAL: Trying to obtain current memory policy. 
00:05:34.607 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.607 EAL: Restoring previous memory policy: 4 00:05:34.607 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.607 EAL: request: mp_malloc_sync 00:05:34.607 EAL: No shared files mode enabled, IPC is disabled 00:05:34.607 EAL: Heap on socket 0 was expanded by 66MB 00:05:34.607 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.607 EAL: request: mp_malloc_sync 00:05:34.607 EAL: No shared files mode enabled, IPC is disabled 00:05:34.607 EAL: Heap on socket 0 was shrunk by 66MB 00:05:34.866 EAL: Trying to obtain current memory policy. 00:05:34.866 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:34.866 EAL: Restoring previous memory policy: 4 00:05:34.866 EAL: Calling mem event callback 'spdk:(nil)' 00:05:34.866 EAL: request: mp_malloc_sync 00:05:34.866 EAL: No shared files mode enabled, IPC is disabled 00:05:34.866 EAL: Heap on socket 0 was expanded by 130MB 00:05:35.125 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.125 EAL: request: mp_malloc_sync 00:05:35.125 EAL: No shared files mode enabled, IPC is disabled 00:05:35.125 EAL: Heap on socket 0 was shrunk by 130MB 00:05:35.125 EAL: Trying to obtain current memory policy. 00:05:35.125 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:35.385 EAL: Restoring previous memory policy: 4 00:05:35.385 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.385 EAL: request: mp_malloc_sync 00:05:35.385 EAL: No shared files mode enabled, IPC is disabled 00:05:35.385 EAL: Heap on socket 0 was expanded by 258MB 00:05:35.644 EAL: Calling mem event callback 'spdk:(nil)' 00:05:35.644 EAL: request: mp_malloc_sync 00:05:35.644 EAL: No shared files mode enabled, IPC is disabled 00:05:35.644 EAL: Heap on socket 0 was shrunk by 258MB 00:05:36.213 EAL: Trying to obtain current memory policy. 00:05:36.213 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:36.213 EAL: Restoring previous memory policy: 4 00:05:36.213 EAL: Calling mem event callback 'spdk:(nil)' 00:05:36.213 EAL: request: mp_malloc_sync 00:05:36.213 EAL: No shared files mode enabled, IPC is disabled 00:05:36.213 EAL: Heap on socket 0 was expanded by 514MB 00:05:37.149 EAL: Calling mem event callback 'spdk:(nil)' 00:05:37.149 EAL: request: mp_malloc_sync 00:05:37.149 EAL: No shared files mode enabled, IPC is disabled 00:05:37.149 EAL: Heap on socket 0 was shrunk by 514MB 00:05:38.085 EAL: Trying to obtain current memory policy. 
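The rounds above step the allocation size up by roughly 1.5-2x each time (4MB, 6MB, 10MB, 18MB, 34MB, 66MB, 130MB, 258MB, 514MB, with a final 1026MB round below), and every 'expanded' line should have a matching 'shrunk' line, confirming the spdk mem event callback fires symmetrically on grow and release. A quick sketch for auditing that from a captured log (build.log is a placeholder path):

    # List the grow sequence in order.
    grep -o 'expanded by [0-9]*MB' build.log
    # The two counts should match on a healthy run.
    grep -c 'was expanded' build.log
    grep -c 'was shrunk' build.log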
00:05:38.085 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.344 EAL: Restoring previous memory policy: 4 00:05:38.344 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.344 EAL: request: mp_malloc_sync 00:05:38.344 EAL: No shared files mode enabled, IPC is disabled 00:05:38.344 EAL: Heap on socket 0 was expanded by 1026MB 00:05:40.252 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.252 EAL: request: mp_malloc_sync 00:05:40.252 EAL: No shared files mode enabled, IPC is disabled 00:05:40.252 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:42.171 passed 00:05:42.171 00:05:42.171 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.171 suites 1 1 n/a 0 0 00:05:42.171 tests 2 2 2 0 0 00:05:42.171 asserts 5810 5810 5810 0 n/a 00:05:42.171 00:05:42.171 Elapsed time = 8.090 seconds 00:05:42.171 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.171 EAL: request: mp_malloc_sync 00:05:42.171 EAL: No shared files mode enabled, IPC is disabled 00:05:42.171 EAL: Heap on socket 0 was shrunk by 2MB 00:05:42.171 EAL: No shared files mode enabled, IPC is disabled 00:05:42.171 EAL: No shared files mode enabled, IPC is disabled 00:05:42.171 EAL: No shared files mode enabled, IPC is disabled 00:05:42.171 00:05:42.171 real 0m8.433s 00:05:42.171 user 0m7.396s 00:05:42.171 sys 0m0.881s 00:05:42.171 03:50:24 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.171 03:50:24 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:42.171 ************************************ 00:05:42.171 END TEST env_vtophys 00:05:42.171 ************************************ 00:05:42.171 03:50:24 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:42.171 03:50:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.171 03:50:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.171 03:50:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:42.171 ************************************ 00:05:42.171 START TEST env_pci 00:05:42.171 ************************************ 00:05:42.171 03:50:24 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:42.171 00:05:42.171 00:05:42.171 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.171 http://cunit.sourceforge.net/ 00:05:42.171 00:05:42.171 00:05:42.171 Suite: pci 00:05:42.171 Test: pci_hook ...[2024-12-07 03:50:24.751515] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57600 has claimed it 00:05:42.171 passed 00:05:42.171 00:05:42.171 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.171 suites 1 1 n/a 0 0 00:05:42.171 tests 1 1 1 0 0 00:05:42.171 asserts 25 25 25 0 n/a 00:05:42.171 00:05:42.171 Elapsed time = 0.010 seconds 00:05:42.171 EAL: Cannot find device (10000:00:01.0) 00:05:42.171 EAL: Failed to attach device on primary process 00:05:42.171 00:05:42.171 real 0m0.115s 00:05:42.171 user 0m0.047s 00:05:42.171 sys 0m0.066s 00:05:42.171 03:50:24 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.171 03:50:24 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:42.171 ************************************ 00:05:42.171 END TEST env_pci 00:05:42.171 ************************************ 00:05:42.171 03:50:24 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:42.171 03:50:24 env -- env/env.sh@15 -- # uname 00:05:42.171 03:50:24 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:42.171 03:50:24 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:42.171 03:50:24 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:42.171 03:50:24 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:42.171 03:50:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.171 03:50:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:42.431 ************************************ 00:05:42.431 START TEST env_dpdk_post_init 00:05:42.431 ************************************ 00:05:42.431 03:50:24 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:42.431 EAL: Detected CPU lcores: 10 00:05:42.431 EAL: Detected NUMA nodes: 1 00:05:42.431 EAL: Detected shared linkage of DPDK 00:05:42.431 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:42.431 EAL: Selected IOVA mode 'PA' 00:05:42.431 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:42.431 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:42.431 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:42.431 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:42.431 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:42.690 Starting DPDK initialization... 00:05:42.690 Starting SPDK post initialization... 00:05:42.690 SPDK NVMe probe 00:05:42.690 Attaching to 0000:00:10.0 00:05:42.690 Attaching to 0000:00:11.0 00:05:42.690 Attaching to 0000:00:12.0 00:05:42.690 Attaching to 0000:00:13.0 00:05:42.690 Attached to 0000:00:10.0 00:05:42.690 Attached to 0000:00:11.0 00:05:42.690 Attached to 0000:00:13.0 00:05:42.690 Attached to 0000:00:12.0 00:05:42.690 Cleaning up... 
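Vendor:device 1b36:0010 is the QEMU-emulated NVMe controller, so the four probes above are virtual devices; note the 'Attached to' lines arrive as 10.0, 11.0, 13.0, 12.0 -- attach completion is asynchronous and need not match probe order. The same functions can be listed from the host side with stock pciutils (no assumptions beyond the IDs printed above):

    lspci -d 1b36:0010   # one line per emulated NVMe function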
00:05:42.690 00:05:42.690 real 0m0.304s 00:05:42.690 user 0m0.099s 00:05:42.690 sys 0m0.108s 00:05:42.690 03:50:25 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.690 03:50:25 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:42.690 ************************************ 00:05:42.690 END TEST env_dpdk_post_init 00:05:42.690 ************************************ 00:05:42.690 03:50:25 env -- env/env.sh@26 -- # uname 00:05:42.690 03:50:25 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:42.690 03:50:25 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:42.690 03:50:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.690 03:50:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.690 03:50:25 env -- common/autotest_common.sh@10 -- # set +x 00:05:42.690 ************************************ 00:05:42.690 START TEST env_mem_callbacks 00:05:42.690 ************************************ 00:05:42.690 03:50:25 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:42.690 EAL: Detected CPU lcores: 10 00:05:42.690 EAL: Detected NUMA nodes: 1 00:05:42.690 EAL: Detected shared linkage of DPDK 00:05:42.690 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:42.690 EAL: Selected IOVA mode 'PA' 00:05:42.952 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:42.952 00:05:42.952 00:05:42.952 CUnit - A unit testing framework for C - Version 2.1-3 00:05:42.952 http://cunit.sourceforge.net/ 00:05:42.952 00:05:42.952 00:05:42.952 Suite: memory 00:05:42.952 Test: test ... 00:05:42.952 register 0x200000200000 2097152 00:05:42.952 malloc 3145728 00:05:42.952 register 0x200000400000 4194304 00:05:42.952 buf 0x2000004fffc0 len 3145728 PASSED 00:05:42.952 malloc 64 00:05:42.952 buf 0x2000004ffec0 len 64 PASSED 00:05:42.952 malloc 4194304 00:05:42.952 register 0x200000800000 6291456 00:05:42.952 buf 0x2000009fffc0 len 4194304 PASSED 00:05:42.952 free 0x2000004fffc0 3145728 00:05:42.952 free 0x2000004ffec0 64 00:05:42.952 unregister 0x200000400000 4194304 PASSED 00:05:42.952 free 0x2000009fffc0 4194304 00:05:42.952 unregister 0x200000800000 6291456 PASSED 00:05:42.952 malloc 8388608 00:05:42.952 register 0x200000400000 10485760 00:05:42.952 buf 0x2000005fffc0 len 8388608 PASSED 00:05:42.952 free 0x2000005fffc0 8388608 00:05:42.952 unregister 0x200000400000 10485760 PASSED 00:05:42.952 passed 00:05:42.952 00:05:42.952 Run Summary: Type Total Ran Passed Failed Inactive 00:05:42.952 suites 1 1 n/a 0 0 00:05:42.952 tests 1 1 1 0 0 00:05:42.952 asserts 15 15 15 0 n/a 00:05:42.952 00:05:42.952 Elapsed time = 0.079 seconds 00:05:42.952 00:05:42.952 real 0m0.306s 00:05:42.952 user 0m0.094s 00:05:42.953 sys 0m0.109s 00:05:42.953 03:50:25 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.953 03:50:25 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:42.953 ************************************ 00:05:42.953 END TEST env_mem_callbacks 00:05:42.953 ************************************ 00:05:42.953 00:05:42.953 real 0m10.091s 00:05:42.953 user 0m8.131s 00:05:42.953 sys 0m1.588s 00:05:42.953 03:50:25 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.953 ************************************ 00:05:42.953 END TEST env 00:05:42.953 ************************************ 00:05:42.953 03:50:25 env -- 
common/autotest_common.sh@10 -- # set +x 00:05:43.213 03:50:25 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:43.213 03:50:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:43.213 03:50:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:43.213 03:50:25 -- common/autotest_common.sh@10 -- # set +x 00:05:43.213 ************************************ 00:05:43.213 START TEST rpc 00:05:43.213 ************************************ 00:05:43.213 03:50:25 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:43.213 * Looking for test storage... 00:05:43.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:43.213 03:50:25 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:43.213 03:50:25 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:43.213 03:50:25 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:43.213 03:50:25 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:43.213 03:50:25 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.213 03:50:25 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.213 03:50:25 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.213 03:50:25 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.213 03:50:25 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.213 03:50:25 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.213 03:50:25 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.213 03:50:25 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.213 03:50:25 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.473 03:50:25 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.473 03:50:25 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.473 03:50:25 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:43.473 03:50:25 rpc -- scripts/common.sh@345 -- # : 1 00:05:43.473 03:50:25 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.473 03:50:25 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:43.473 03:50:25 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:43.473 03:50:25 rpc -- scripts/common.sh@353 -- # local d=1 00:05:43.473 03:50:25 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.473 03:50:25 rpc -- scripts/common.sh@355 -- # echo 1 00:05:43.473 03:50:25 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.473 03:50:25 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:43.473 03:50:25 rpc -- scripts/common.sh@353 -- # local d=2 00:05:43.473 03:50:25 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.473 03:50:25 rpc -- scripts/common.sh@355 -- # echo 2 00:05:43.473 03:50:25 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.473 03:50:25 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.473 03:50:25 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.473 03:50:25 rpc -- scripts/common.sh@368 -- # return 0 00:05:43.473 03:50:25 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.473 03:50:25 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:43.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.473 --rc genhtml_branch_coverage=1 00:05:43.473 --rc genhtml_function_coverage=1 00:05:43.473 --rc genhtml_legend=1 00:05:43.473 --rc geninfo_all_blocks=1 00:05:43.473 --rc geninfo_unexecuted_blocks=1 00:05:43.473 00:05:43.473 ' 00:05:43.473 03:50:25 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:43.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.473 --rc genhtml_branch_coverage=1 00:05:43.473 --rc genhtml_function_coverage=1 00:05:43.473 --rc genhtml_legend=1 00:05:43.473 --rc geninfo_all_blocks=1 00:05:43.473 --rc geninfo_unexecuted_blocks=1 00:05:43.473 00:05:43.473 ' 00:05:43.473 03:50:25 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:43.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.473 --rc genhtml_branch_coverage=1 00:05:43.473 --rc genhtml_function_coverage=1 00:05:43.473 --rc genhtml_legend=1 00:05:43.473 --rc geninfo_all_blocks=1 00:05:43.473 --rc geninfo_unexecuted_blocks=1 00:05:43.473 00:05:43.473 ' 00:05:43.473 03:50:25 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:43.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.473 --rc genhtml_branch_coverage=1 00:05:43.473 --rc genhtml_function_coverage=1 00:05:43.473 --rc genhtml_legend=1 00:05:43.473 --rc geninfo_all_blocks=1 00:05:43.473 --rc geninfo_unexecuted_blocks=1 00:05:43.473 00:05:43.473 ' 00:05:43.473 03:50:25 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57733 00:05:43.473 03:50:25 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:43.473 03:50:25 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.473 03:50:25 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57733 00:05:43.473 03:50:25 rpc -- common/autotest_common.sh@835 -- # '[' -z 57733 ']' 00:05:43.473 03:50:25 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.473 03:50:25 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.473 03:50:25 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
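At this point rpc.sh has launched spdk_tgt with '-e bdev' (enabling the bdev tracepoint group, which shows up as tpoint_group_mask 0x8 in the trace test further down) and is polling for the UNIX socket. A minimal sketch of querying that same target by hand, assuming the repo layout used throughout this log:

    # From /home/vagrant/spdk_repo/spdk, once the socket exists:
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs   # prints [] on a fresh target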
00:05:43.473 03:50:25 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.473 03:50:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.473 [2024-12-07 03:50:26.082328] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:05:43.473 [2024-12-07 03:50:26.082454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57733 ] 00:05:43.733 [2024-12-07 03:50:26.265455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.733 [2024-12-07 03:50:26.370416] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:43.733 [2024-12-07 03:50:26.370477] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57733' to capture a snapshot of events at runtime. 00:05:43.733 [2024-12-07 03:50:26.370490] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:43.733 [2024-12-07 03:50:26.370504] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:43.733 [2024-12-07 03:50:26.370514] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57733 for offline analysis/debug. 00:05:43.733 [2024-12-07 03:50:26.371783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.672 03:50:27 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.672 03:50:27 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:44.672 03:50:27 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:44.672 03:50:27 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:44.672 03:50:27 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:44.672 03:50:27 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:44.672 03:50:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.672 03:50:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.672 03:50:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.672 ************************************ 00:05:44.672 START TEST rpc_integrity 00:05:44.672 ************************************ 00:05:44.672 03:50:27 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:44.672 03:50:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:44.672 03:50:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.672 03:50:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.672 03:50:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.672 03:50:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:44.672 03:50:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:44.672 03:50:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:44.672 03:50:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:44.672 03:50:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.672 03:50:27 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.672 03:50:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.672 03:50:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:44.672 03:50:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:44.672 03:50:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.672 03:50:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.672 03:50:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.672 03:50:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:44.672 { 00:05:44.672 "name": "Malloc0", 00:05:44.672 "aliases": [ 00:05:44.672 "69311e4a-ba5a-46a2-8a31-1ea48fda1df4" 00:05:44.672 ], 00:05:44.672 "product_name": "Malloc disk", 00:05:44.672 "block_size": 512, 00:05:44.672 "num_blocks": 16384, 00:05:44.672 "uuid": "69311e4a-ba5a-46a2-8a31-1ea48fda1df4", 00:05:44.672 "assigned_rate_limits": { 00:05:44.672 "rw_ios_per_sec": 0, 00:05:44.672 "rw_mbytes_per_sec": 0, 00:05:44.672 "r_mbytes_per_sec": 0, 00:05:44.672 "w_mbytes_per_sec": 0 00:05:44.672 }, 00:05:44.672 "claimed": false, 00:05:44.672 "zoned": false, 00:05:44.672 "supported_io_types": { 00:05:44.672 "read": true, 00:05:44.672 "write": true, 00:05:44.672 "unmap": true, 00:05:44.672 "flush": true, 00:05:44.672 "reset": true, 00:05:44.672 "nvme_admin": false, 00:05:44.672 "nvme_io": false, 00:05:44.672 "nvme_io_md": false, 00:05:44.672 "write_zeroes": true, 00:05:44.672 "zcopy": true, 00:05:44.672 "get_zone_info": false, 00:05:44.672 "zone_management": false, 00:05:44.672 "zone_append": false, 00:05:44.672 "compare": false, 00:05:44.672 "compare_and_write": false, 00:05:44.672 "abort": true, 00:05:44.672 "seek_hole": false, 00:05:44.672 "seek_data": false, 00:05:44.672 "copy": true, 00:05:44.672 "nvme_iov_md": false 00:05:44.672 }, 00:05:44.672 "memory_domains": [ 00:05:44.672 { 00:05:44.672 "dma_device_id": "system", 00:05:44.672 "dma_device_type": 1 00:05:44.672 }, 00:05:44.672 { 00:05:44.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.672 "dma_device_type": 2 00:05:44.672 } 00:05:44.672 ], 00:05:44.672 "driver_specific": {} 00:05:44.672 } 00:05:44.672 ]' 00:05:44.672 03:50:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:44.672 03:50:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:44.672 03:50:27 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:44.672 03:50:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.672 03:50:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.672 [2024-12-07 03:50:27.370525] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:44.672 [2024-12-07 03:50:27.370586] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:44.672 [2024-12-07 03:50:27.370613] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:44.672 [2024-12-07 03:50:27.370632] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:44.672 [2024-12-07 03:50:27.373272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:44.672 [2024-12-07 03:50:27.373319] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:44.672 Passthru0 00:05:44.672 03:50:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.672 
03:50:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:44.672 03:50:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.672 03:50:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.932 03:50:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.932 03:50:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:44.932 { 00:05:44.932 "name": "Malloc0", 00:05:44.932 "aliases": [ 00:05:44.932 "69311e4a-ba5a-46a2-8a31-1ea48fda1df4" 00:05:44.932 ], 00:05:44.932 "product_name": "Malloc disk", 00:05:44.932 "block_size": 512, 00:05:44.932 "num_blocks": 16384, 00:05:44.932 "uuid": "69311e4a-ba5a-46a2-8a31-1ea48fda1df4", 00:05:44.932 "assigned_rate_limits": { 00:05:44.932 "rw_ios_per_sec": 0, 00:05:44.932 "rw_mbytes_per_sec": 0, 00:05:44.932 "r_mbytes_per_sec": 0, 00:05:44.932 "w_mbytes_per_sec": 0 00:05:44.932 }, 00:05:44.932 "claimed": true, 00:05:44.932 "claim_type": "exclusive_write", 00:05:44.932 "zoned": false, 00:05:44.932 "supported_io_types": { 00:05:44.932 "read": true, 00:05:44.932 "write": true, 00:05:44.932 "unmap": true, 00:05:44.932 "flush": true, 00:05:44.932 "reset": true, 00:05:44.932 "nvme_admin": false, 00:05:44.932 "nvme_io": false, 00:05:44.932 "nvme_io_md": false, 00:05:44.932 "write_zeroes": true, 00:05:44.932 "zcopy": true, 00:05:44.932 "get_zone_info": false, 00:05:44.932 "zone_management": false, 00:05:44.932 "zone_append": false, 00:05:44.932 "compare": false, 00:05:44.932 "compare_and_write": false, 00:05:44.932 "abort": true, 00:05:44.932 "seek_hole": false, 00:05:44.932 "seek_data": false, 00:05:44.932 "copy": true, 00:05:44.932 "nvme_iov_md": false 00:05:44.932 }, 00:05:44.932 "memory_domains": [ 00:05:44.932 { 00:05:44.932 "dma_device_id": "system", 00:05:44.932 "dma_device_type": 1 00:05:44.932 }, 00:05:44.932 { 00:05:44.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.932 "dma_device_type": 2 00:05:44.932 } 00:05:44.932 ], 00:05:44.932 "driver_specific": {} 00:05:44.932 }, 00:05:44.932 { 00:05:44.932 "name": "Passthru0", 00:05:44.932 "aliases": [ 00:05:44.932 "82b29224-2500-53a2-b5e8-89784dfe274d" 00:05:44.932 ], 00:05:44.932 "product_name": "passthru", 00:05:44.932 "block_size": 512, 00:05:44.932 "num_blocks": 16384, 00:05:44.932 "uuid": "82b29224-2500-53a2-b5e8-89784dfe274d", 00:05:44.932 "assigned_rate_limits": { 00:05:44.932 "rw_ios_per_sec": 0, 00:05:44.932 "rw_mbytes_per_sec": 0, 00:05:44.932 "r_mbytes_per_sec": 0, 00:05:44.932 "w_mbytes_per_sec": 0 00:05:44.932 }, 00:05:44.932 "claimed": false, 00:05:44.932 "zoned": false, 00:05:44.932 "supported_io_types": { 00:05:44.932 "read": true, 00:05:44.932 "write": true, 00:05:44.932 "unmap": true, 00:05:44.932 "flush": true, 00:05:44.932 "reset": true, 00:05:44.932 "nvme_admin": false, 00:05:44.932 "nvme_io": false, 00:05:44.932 "nvme_io_md": false, 00:05:44.932 "write_zeroes": true, 00:05:44.932 "zcopy": true, 00:05:44.933 "get_zone_info": false, 00:05:44.933 "zone_management": false, 00:05:44.933 "zone_append": false, 00:05:44.933 "compare": false, 00:05:44.933 "compare_and_write": false, 00:05:44.933 "abort": true, 00:05:44.933 "seek_hole": false, 00:05:44.933 "seek_data": false, 00:05:44.933 "copy": true, 00:05:44.933 "nvme_iov_md": false 00:05:44.933 }, 00:05:44.933 "memory_domains": [ 00:05:44.933 { 00:05:44.933 "dma_device_id": "system", 00:05:44.933 "dma_device_type": 1 00:05:44.933 }, 00:05:44.933 { 00:05:44.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:44.933 "dma_device_type": 2 
00:05:44.933 } 00:05:44.933 ], 00:05:44.933 "driver_specific": { 00:05:44.933 "passthru": { 00:05:44.933 "name": "Passthru0", 00:05:44.933 "base_bdev_name": "Malloc0" 00:05:44.933 } 00:05:44.933 } 00:05:44.933 } 00:05:44.933 ]' 00:05:44.933 03:50:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:44.933 03:50:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:44.933 03:50:27 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:44.933 03:50:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.933 03:50:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.933 03:50:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.933 03:50:27 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:44.933 03:50:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.933 03:50:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.933 03:50:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.933 03:50:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:44.933 03:50:27 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.933 03:50:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.933 03:50:27 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.933 03:50:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:44.933 03:50:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:44.933 ************************************ 00:05:44.933 END TEST rpc_integrity 00:05:44.933 ************************************ 00:05:44.933 03:50:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:44.933 00:05:44.933 real 0m0.338s 00:05:44.933 user 0m0.190s 00:05:44.933 sys 0m0.058s 00:05:44.933 03:50:27 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.933 03:50:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:44.933 03:50:27 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:44.933 03:50:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.933 03:50:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.933 03:50:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.933 ************************************ 00:05:44.933 START TEST rpc_plugins 00:05:44.933 ************************************ 00:05:44.933 03:50:27 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:44.933 03:50:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:44.933 03:50:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.933 03:50:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:44.933 03:50:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.933 03:50:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:44.933 03:50:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:44.933 03:50:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.933 03:50:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.193 03:50:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.193 03:50:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:45.193 { 00:05:45.193 "name": "Malloc1", 00:05:45.193 "aliases": 
[ 00:05:45.193 "3f7ba7e1-a5a5-4e37-af23-df4c89bbe5e9" 00:05:45.193 ], 00:05:45.193 "product_name": "Malloc disk", 00:05:45.193 "block_size": 4096, 00:05:45.193 "num_blocks": 256, 00:05:45.193 "uuid": "3f7ba7e1-a5a5-4e37-af23-df4c89bbe5e9", 00:05:45.193 "assigned_rate_limits": { 00:05:45.193 "rw_ios_per_sec": 0, 00:05:45.193 "rw_mbytes_per_sec": 0, 00:05:45.193 "r_mbytes_per_sec": 0, 00:05:45.193 "w_mbytes_per_sec": 0 00:05:45.193 }, 00:05:45.193 "claimed": false, 00:05:45.193 "zoned": false, 00:05:45.193 "supported_io_types": { 00:05:45.193 "read": true, 00:05:45.193 "write": true, 00:05:45.193 "unmap": true, 00:05:45.193 "flush": true, 00:05:45.193 "reset": true, 00:05:45.193 "nvme_admin": false, 00:05:45.193 "nvme_io": false, 00:05:45.193 "nvme_io_md": false, 00:05:45.193 "write_zeroes": true, 00:05:45.193 "zcopy": true, 00:05:45.193 "get_zone_info": false, 00:05:45.193 "zone_management": false, 00:05:45.193 "zone_append": false, 00:05:45.193 "compare": false, 00:05:45.193 "compare_and_write": false, 00:05:45.193 "abort": true, 00:05:45.193 "seek_hole": false, 00:05:45.193 "seek_data": false, 00:05:45.193 "copy": true, 00:05:45.193 "nvme_iov_md": false 00:05:45.193 }, 00:05:45.193 "memory_domains": [ 00:05:45.193 { 00:05:45.193 "dma_device_id": "system", 00:05:45.193 "dma_device_type": 1 00:05:45.193 }, 00:05:45.193 { 00:05:45.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.193 "dma_device_type": 2 00:05:45.193 } 00:05:45.193 ], 00:05:45.193 "driver_specific": {} 00:05:45.193 } 00:05:45.193 ]' 00:05:45.193 03:50:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:45.193 03:50:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:45.193 03:50:27 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:45.193 03:50:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.193 03:50:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.193 03:50:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.193 03:50:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:45.193 03:50:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.193 03:50:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.193 03:50:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.193 03:50:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:45.193 03:50:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:45.193 ************************************ 00:05:45.193 END TEST rpc_plugins 00:05:45.193 ************************************ 00:05:45.193 03:50:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:45.193 00:05:45.193 real 0m0.163s 00:05:45.193 user 0m0.095s 00:05:45.193 sys 0m0.029s 00:05:45.193 03:50:27 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.193 03:50:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:45.193 03:50:27 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:45.193 03:50:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.193 03:50:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.193 03:50:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.193 ************************************ 00:05:45.193 START TEST rpc_trace_cmd_test 00:05:45.193 ************************************ 00:05:45.193 03:50:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:05:45.193 03:50:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:45.193 03:50:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:45.193 03:50:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.193 03:50:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:45.193 03:50:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.193 03:50:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:45.193 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57733", 00:05:45.193 "tpoint_group_mask": "0x8", 00:05:45.193 "iscsi_conn": { 00:05:45.193 "mask": "0x2", 00:05:45.193 "tpoint_mask": "0x0" 00:05:45.193 }, 00:05:45.193 "scsi": { 00:05:45.193 "mask": "0x4", 00:05:45.193 "tpoint_mask": "0x0" 00:05:45.193 }, 00:05:45.193 "bdev": { 00:05:45.193 "mask": "0x8", 00:05:45.193 "tpoint_mask": "0xffffffffffffffff" 00:05:45.193 }, 00:05:45.193 "nvmf_rdma": { 00:05:45.193 "mask": "0x10", 00:05:45.193 "tpoint_mask": "0x0" 00:05:45.193 }, 00:05:45.193 "nvmf_tcp": { 00:05:45.193 "mask": "0x20", 00:05:45.193 "tpoint_mask": "0x0" 00:05:45.193 }, 00:05:45.193 "ftl": { 00:05:45.193 "mask": "0x40", 00:05:45.193 "tpoint_mask": "0x0" 00:05:45.193 }, 00:05:45.193 "blobfs": { 00:05:45.193 "mask": "0x80", 00:05:45.193 "tpoint_mask": "0x0" 00:05:45.193 }, 00:05:45.193 "dsa": { 00:05:45.193 "mask": "0x200", 00:05:45.193 "tpoint_mask": "0x0" 00:05:45.193 }, 00:05:45.193 "thread": { 00:05:45.194 "mask": "0x400", 00:05:45.194 "tpoint_mask": "0x0" 00:05:45.194 }, 00:05:45.194 "nvme_pcie": { 00:05:45.194 "mask": "0x800", 00:05:45.194 "tpoint_mask": "0x0" 00:05:45.194 }, 00:05:45.194 "iaa": { 00:05:45.194 "mask": "0x1000", 00:05:45.194 "tpoint_mask": "0x0" 00:05:45.194 }, 00:05:45.194 "nvme_tcp": { 00:05:45.194 "mask": "0x2000", 00:05:45.194 "tpoint_mask": "0x0" 00:05:45.194 }, 00:05:45.194 "bdev_nvme": { 00:05:45.194 "mask": "0x4000", 00:05:45.194 "tpoint_mask": "0x0" 00:05:45.194 }, 00:05:45.194 "sock": { 00:05:45.194 "mask": "0x8000", 00:05:45.194 "tpoint_mask": "0x0" 00:05:45.194 }, 00:05:45.194 "blob": { 00:05:45.194 "mask": "0x10000", 00:05:45.194 "tpoint_mask": "0x0" 00:05:45.194 }, 00:05:45.194 "bdev_raid": { 00:05:45.194 "mask": "0x20000", 00:05:45.194 "tpoint_mask": "0x0" 00:05:45.194 }, 00:05:45.194 "scheduler": { 00:05:45.194 "mask": "0x40000", 00:05:45.194 "tpoint_mask": "0x0" 00:05:45.194 } 00:05:45.194 }' 00:05:45.194 03:50:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:45.453 03:50:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:45.453 03:50:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:45.453 03:50:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:45.453 03:50:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:45.453 03:50:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:45.453 03:50:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:45.453 03:50:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:45.453 03:50:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:45.453 ************************************ 00:05:45.453 END TEST rpc_trace_cmd_test 00:05:45.453 ************************************ 00:05:45.453 03:50:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:45.453 00:05:45.453 real 0m0.250s 
00:05:45.453 user 0m0.201s 00:05:45.453 sys 0m0.042s 00:05:45.453 03:50:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.453 03:50:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:45.453 03:50:28 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:45.453 03:50:28 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:45.453 03:50:28 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:45.453 03:50:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.453 03:50:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.453 03:50:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.453 ************************************ 00:05:45.453 START TEST rpc_daemon_integrity 00:05:45.453 ************************************ 00:05:45.713 03:50:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:45.713 03:50:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:45.713 03:50:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.713 03:50:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.713 03:50:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.713 03:50:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:45.713 03:50:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:45.713 03:50:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:45.713 03:50:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:45.713 03:50:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.713 03:50:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.713 03:50:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.713 03:50:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:45.713 03:50:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:45.713 03:50:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.713 03:50:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.713 03:50:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.713 03:50:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:45.713 { 00:05:45.713 "name": "Malloc2", 00:05:45.713 "aliases": [ 00:05:45.713 "4c7b5964-a334-41a9-8c63-199780dabcf3" 00:05:45.713 ], 00:05:45.713 "product_name": "Malloc disk", 00:05:45.713 "block_size": 512, 00:05:45.713 "num_blocks": 16384, 00:05:45.713 "uuid": "4c7b5964-a334-41a9-8c63-199780dabcf3", 00:05:45.713 "assigned_rate_limits": { 00:05:45.713 "rw_ios_per_sec": 0, 00:05:45.713 "rw_mbytes_per_sec": 0, 00:05:45.713 "r_mbytes_per_sec": 0, 00:05:45.713 "w_mbytes_per_sec": 0 00:05:45.713 }, 00:05:45.713 "claimed": false, 00:05:45.713 "zoned": false, 00:05:45.713 "supported_io_types": { 00:05:45.713 "read": true, 00:05:45.713 "write": true, 00:05:45.713 "unmap": true, 00:05:45.713 "flush": true, 00:05:45.713 "reset": true, 00:05:45.713 "nvme_admin": false, 00:05:45.713 "nvme_io": false, 00:05:45.713 "nvme_io_md": false, 00:05:45.713 "write_zeroes": true, 00:05:45.713 "zcopy": true, 00:05:45.713 "get_zone_info": false, 00:05:45.713 "zone_management": false, 00:05:45.713 "zone_append": false, 00:05:45.713 "compare": false, 00:05:45.713 
"compare_and_write": false, 00:05:45.713 "abort": true, 00:05:45.713 "seek_hole": false, 00:05:45.713 "seek_data": false, 00:05:45.713 "copy": true, 00:05:45.713 "nvme_iov_md": false 00:05:45.713 }, 00:05:45.713 "memory_domains": [ 00:05:45.713 { 00:05:45.713 "dma_device_id": "system", 00:05:45.713 "dma_device_type": 1 00:05:45.713 }, 00:05:45.713 { 00:05:45.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.713 "dma_device_type": 2 00:05:45.713 } 00:05:45.713 ], 00:05:45.713 "driver_specific": {} 00:05:45.713 } 00:05:45.713 ]' 00:05:45.713 03:50:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:45.713 03:50:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:45.713 03:50:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:45.713 03:50:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.713 03:50:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.713 [2024-12-07 03:50:28.349103] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:45.713 [2024-12-07 03:50:28.349160] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:45.713 [2024-12-07 03:50:28.349181] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:45.713 [2024-12-07 03:50:28.349195] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:45.713 [2024-12-07 03:50:28.351670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:45.713 [2024-12-07 03:50:28.351819] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:45.713 Passthru0 00:05:45.713 03:50:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.713 03:50:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:45.713 03:50:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.713 03:50:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.713 03:50:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.713 03:50:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:45.713 { 00:05:45.713 "name": "Malloc2", 00:05:45.713 "aliases": [ 00:05:45.713 "4c7b5964-a334-41a9-8c63-199780dabcf3" 00:05:45.713 ], 00:05:45.713 "product_name": "Malloc disk", 00:05:45.713 "block_size": 512, 00:05:45.713 "num_blocks": 16384, 00:05:45.713 "uuid": "4c7b5964-a334-41a9-8c63-199780dabcf3", 00:05:45.713 "assigned_rate_limits": { 00:05:45.713 "rw_ios_per_sec": 0, 00:05:45.714 "rw_mbytes_per_sec": 0, 00:05:45.714 "r_mbytes_per_sec": 0, 00:05:45.714 "w_mbytes_per_sec": 0 00:05:45.714 }, 00:05:45.714 "claimed": true, 00:05:45.714 "claim_type": "exclusive_write", 00:05:45.714 "zoned": false, 00:05:45.714 "supported_io_types": { 00:05:45.714 "read": true, 00:05:45.714 "write": true, 00:05:45.714 "unmap": true, 00:05:45.714 "flush": true, 00:05:45.714 "reset": true, 00:05:45.714 "nvme_admin": false, 00:05:45.714 "nvme_io": false, 00:05:45.714 "nvme_io_md": false, 00:05:45.714 "write_zeroes": true, 00:05:45.714 "zcopy": true, 00:05:45.714 "get_zone_info": false, 00:05:45.714 "zone_management": false, 00:05:45.714 "zone_append": false, 00:05:45.714 "compare": false, 00:05:45.714 "compare_and_write": false, 00:05:45.714 "abort": true, 00:05:45.714 "seek_hole": false, 00:05:45.714 "seek_data": false, 
00:05:45.714 "copy": true, 00:05:45.714 "nvme_iov_md": false 00:05:45.714 }, 00:05:45.714 "memory_domains": [ 00:05:45.714 { 00:05:45.714 "dma_device_id": "system", 00:05:45.714 "dma_device_type": 1 00:05:45.714 }, 00:05:45.714 { 00:05:45.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.714 "dma_device_type": 2 00:05:45.714 } 00:05:45.714 ], 00:05:45.714 "driver_specific": {} 00:05:45.714 }, 00:05:45.714 { 00:05:45.714 "name": "Passthru0", 00:05:45.714 "aliases": [ 00:05:45.714 "c210fc3c-0f4f-5a51-a1f3-0e33e4cd4700" 00:05:45.714 ], 00:05:45.714 "product_name": "passthru", 00:05:45.714 "block_size": 512, 00:05:45.714 "num_blocks": 16384, 00:05:45.714 "uuid": "c210fc3c-0f4f-5a51-a1f3-0e33e4cd4700", 00:05:45.714 "assigned_rate_limits": { 00:05:45.714 "rw_ios_per_sec": 0, 00:05:45.714 "rw_mbytes_per_sec": 0, 00:05:45.714 "r_mbytes_per_sec": 0, 00:05:45.714 "w_mbytes_per_sec": 0 00:05:45.714 }, 00:05:45.714 "claimed": false, 00:05:45.714 "zoned": false, 00:05:45.714 "supported_io_types": { 00:05:45.714 "read": true, 00:05:45.714 "write": true, 00:05:45.714 "unmap": true, 00:05:45.714 "flush": true, 00:05:45.714 "reset": true, 00:05:45.714 "nvme_admin": false, 00:05:45.714 "nvme_io": false, 00:05:45.714 "nvme_io_md": false, 00:05:45.714 "write_zeroes": true, 00:05:45.714 "zcopy": true, 00:05:45.714 "get_zone_info": false, 00:05:45.714 "zone_management": false, 00:05:45.714 "zone_append": false, 00:05:45.714 "compare": false, 00:05:45.714 "compare_and_write": false, 00:05:45.714 "abort": true, 00:05:45.714 "seek_hole": false, 00:05:45.714 "seek_data": false, 00:05:45.714 "copy": true, 00:05:45.714 "nvme_iov_md": false 00:05:45.714 }, 00:05:45.714 "memory_domains": [ 00:05:45.714 { 00:05:45.714 "dma_device_id": "system", 00:05:45.714 "dma_device_type": 1 00:05:45.714 }, 00:05:45.714 { 00:05:45.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:45.714 "dma_device_type": 2 00:05:45.714 } 00:05:45.714 ], 00:05:45.714 "driver_specific": { 00:05:45.714 "passthru": { 00:05:45.714 "name": "Passthru0", 00:05:45.714 "base_bdev_name": "Malloc2" 00:05:45.714 } 00:05:45.714 } 00:05:45.714 } 00:05:45.714 ]' 00:05:45.714 03:50:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:45.714 03:50:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:45.714 03:50:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:45.714 03:50:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.714 03:50:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.974 03:50:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.974 03:50:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:45.974 03:50:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.974 03:50:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.974 03:50:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.974 03:50:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:45.974 03:50:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:45.974 03:50:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.974 03:50:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:45.974 03:50:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:05:45.974 03:50:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:45.974 ************************************ 00:05:45.974 END TEST rpc_daemon_integrity 00:05:45.974 ************************************ 00:05:45.974 03:50:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:45.974 00:05:45.974 real 0m0.358s 00:05:45.974 user 0m0.187s 00:05:45.974 sys 0m0.065s 00:05:45.974 03:50:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.974 03:50:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:45.974 03:50:28 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:45.974 03:50:28 rpc -- rpc/rpc.sh@84 -- # killprocess 57733 00:05:45.974 03:50:28 rpc -- common/autotest_common.sh@954 -- # '[' -z 57733 ']' 00:05:45.974 03:50:28 rpc -- common/autotest_common.sh@958 -- # kill -0 57733 00:05:45.974 03:50:28 rpc -- common/autotest_common.sh@959 -- # uname 00:05:45.974 03:50:28 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.974 03:50:28 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57733 00:05:45.974 killing process with pid 57733 00:05:45.974 03:50:28 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.974 03:50:28 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.974 03:50:28 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57733' 00:05:45.974 03:50:28 rpc -- common/autotest_common.sh@973 -- # kill 57733 00:05:45.974 03:50:28 rpc -- common/autotest_common.sh@978 -- # wait 57733 00:05:48.510 00:05:48.510 real 0m5.307s 00:05:48.510 user 0m5.768s 00:05:48.510 sys 0m1.002s 00:05:48.510 03:50:31 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.510 ************************************ 00:05:48.510 END TEST rpc 00:05:48.510 ************************************ 00:05:48.510 03:50:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.510 03:50:31 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:48.510 03:50:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.510 03:50:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.510 03:50:31 -- common/autotest_common.sh@10 -- # set +x 00:05:48.510 ************************************ 00:05:48.510 START TEST skip_rpc 00:05:48.510 ************************************ 00:05:48.510 03:50:31 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:48.510 * Looking for test storage... 
00:05:48.510 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:48.510 03:50:31 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:48.770 03:50:31 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:48.770 03:50:31 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:48.770 03:50:31 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:48.770 03:50:31 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.770 03:50:31 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.770 03:50:31 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.770 03:50:31 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.770 03:50:31 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.770 03:50:31 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.770 03:50:31 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.770 03:50:31 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.770 03:50:31 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.770 03:50:31 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.770 03:50:31 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.770 03:50:31 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:48.770 03:50:31 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:48.770 03:50:31 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.770 03:50:31 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:48.770 03:50:31 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:48.770 03:50:31 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:48.770 03:50:31 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.770 03:50:31 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:48.770 03:50:31 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.770 03:50:31 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:48.770 03:50:31 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:48.770 03:50:31 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.770 03:50:31 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:48.770 03:50:31 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.770 03:50:31 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.770 03:50:31 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.770 03:50:31 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:48.770 03:50:31 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.770 03:50:31 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:48.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.770 --rc genhtml_branch_coverage=1 00:05:48.770 --rc genhtml_function_coverage=1 00:05:48.770 --rc genhtml_legend=1 00:05:48.770 --rc geninfo_all_blocks=1 00:05:48.770 --rc geninfo_unexecuted_blocks=1 00:05:48.770 00:05:48.770 ' 00:05:48.770 03:50:31 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:48.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.770 --rc genhtml_branch_coverage=1 00:05:48.770 --rc genhtml_function_coverage=1 00:05:48.770 --rc genhtml_legend=1 00:05:48.770 --rc geninfo_all_blocks=1 00:05:48.770 --rc geninfo_unexecuted_blocks=1 00:05:48.770 00:05:48.770 ' 00:05:48.770 03:50:31 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:48.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.770 --rc genhtml_branch_coverage=1 00:05:48.770 --rc genhtml_function_coverage=1 00:05:48.770 --rc genhtml_legend=1 00:05:48.770 --rc geninfo_all_blocks=1 00:05:48.770 --rc geninfo_unexecuted_blocks=1 00:05:48.770 00:05:48.770 ' 00:05:48.770 03:50:31 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:48.770 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.770 --rc genhtml_branch_coverage=1 00:05:48.770 --rc genhtml_function_coverage=1 00:05:48.770 --rc genhtml_legend=1 00:05:48.770 --rc geninfo_all_blocks=1 00:05:48.770 --rc geninfo_unexecuted_blocks=1 00:05:48.770 00:05:48.770 ' 00:05:48.770 03:50:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:48.770 03:50:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:48.770 03:50:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:48.770 03:50:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.770 03:50:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.770 03:50:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.770 ************************************ 00:05:48.770 START TEST skip_rpc 00:05:48.770 ************************************ 00:05:48.770 03:50:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:48.770 03:50:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57962 00:05:48.770 03:50:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:48.770 03:50:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.770 03:50:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:48.770 [2024-12-07 03:50:31.463907] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
00:05:48.770 [2024-12-07 03:50:31.464196] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57962 ] 00:05:49.030 [2024-12-07 03:50:31.646920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.289 [2024-12-07 03:50:31.769685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.568 03:50:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:54.568 03:50:36 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:54.568 03:50:36 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:54.568 03:50:36 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:54.568 03:50:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.568 03:50:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:54.568 03:50:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:54.568 03:50:36 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:54.568 03:50:36 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:54.568 03:50:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.568 03:50:36 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:54.568 03:50:36 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:54.568 03:50:36 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:54.568 03:50:36 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:54.568 03:50:36 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:54.568 03:50:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:54.568 03:50:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57962 00:05:54.568 03:50:36 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57962 ']' 00:05:54.568 03:50:36 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57962 00:05:54.568 03:50:36 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:54.568 03:50:36 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.568 03:50:36 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57962 00:05:54.568 killing process with pid 57962 00:05:54.568 03:50:36 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.568 03:50:36 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.568 03:50:36 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57962' 00:05:54.568 03:50:36 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57962 00:05:54.568 03:50:36 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57962 00:05:56.473 00:05:56.473 real 0m7.527s 00:05:56.473 user 0m7.024s 00:05:56.473 sys 0m0.418s 00:05:56.473 03:50:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.473 ************************************ 00:05:56.473 END TEST skip_rpc 00:05:56.473 ************************************ 00:05:56.473 03:50:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:56.473 03:50:38 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:56.473 03:50:38 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.473 03:50:38 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.473 03:50:38 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.473 ************************************ 00:05:56.473 START TEST skip_rpc_with_json 00:05:56.473 ************************************ 00:05:56.473 03:50:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:56.473 03:50:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:56.473 03:50:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58066 00:05:56.473 03:50:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.473 03:50:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.473 03:50:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58066 00:05:56.473 03:50:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58066 ']' 00:05:56.473 03:50:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.473 03:50:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.473 03:50:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.473 03:50:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.473 03:50:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:56.473 [2024-12-07 03:50:39.067232] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
00:05:56.473 [2024-12-07 03:50:39.067356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58066 ] 00:05:56.733 [2024-12-07 03:50:39.249063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.733 [2024-12-07 03:50:39.353688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.762 03:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.762 03:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:57.762 03:50:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:57.762 03:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.762 03:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:57.762 [2024-12-07 03:50:40.200684] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:57.763 request: 00:05:57.763 { 00:05:57.763 "trtype": "tcp", 00:05:57.763 "method": "nvmf_get_transports", 00:05:57.763 "req_id": 1 00:05:57.763 } 00:05:57.763 Got JSON-RPC error response 00:05:57.763 response: 00:05:57.763 { 00:05:57.763 "code": -19, 00:05:57.763 "message": "No such device" 00:05:57.763 } 00:05:57.763 03:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:57.763 03:50:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:57.763 03:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.763 03:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:57.763 [2024-12-07 03:50:40.216774] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:57.763 03:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.763 03:50:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:57.763 03:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.763 03:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:57.763 03:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.763 03:50:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:57.763 { 00:05:57.763 "subsystems": [ 00:05:57.763 { 00:05:57.763 "subsystem": "fsdev", 00:05:57.763 "config": [ 00:05:57.763 { 00:05:57.763 "method": "fsdev_set_opts", 00:05:57.763 "params": { 00:05:57.763 "fsdev_io_pool_size": 65535, 00:05:57.763 "fsdev_io_cache_size": 256 00:05:57.763 } 00:05:57.763 } 00:05:57.763 ] 00:05:57.763 }, 00:05:57.763 { 00:05:57.763 "subsystem": "keyring", 00:05:57.763 "config": [] 00:05:57.763 }, 00:05:57.763 { 00:05:57.763 "subsystem": "iobuf", 00:05:57.763 "config": [ 00:05:57.763 { 00:05:57.763 "method": "iobuf_set_options", 00:05:57.763 "params": { 00:05:57.763 "small_pool_count": 8192, 00:05:57.763 "large_pool_count": 1024, 00:05:57.763 "small_bufsize": 8192, 00:05:57.763 "large_bufsize": 135168, 00:05:57.763 "enable_numa": false 00:05:57.763 } 00:05:57.763 } 00:05:57.763 ] 00:05:57.763 }, 00:05:57.763 { 00:05:57.763 "subsystem": "sock", 00:05:57.763 "config": [ 00:05:57.763 { 
00:05:57.763 "method": "sock_set_default_impl", 00:05:57.763 "params": { 00:05:57.763 "impl_name": "posix" 00:05:57.763 } 00:05:57.763 }, 00:05:57.763 { 00:05:57.763 "method": "sock_impl_set_options", 00:05:57.763 "params": { 00:05:57.763 "impl_name": "ssl", 00:05:57.763 "recv_buf_size": 4096, 00:05:57.763 "send_buf_size": 4096, 00:05:57.763 "enable_recv_pipe": true, 00:05:57.763 "enable_quickack": false, 00:05:57.763 "enable_placement_id": 0, 00:05:57.763 "enable_zerocopy_send_server": true, 00:05:57.763 "enable_zerocopy_send_client": false, 00:05:57.763 "zerocopy_threshold": 0, 00:05:57.763 "tls_version": 0, 00:05:57.763 "enable_ktls": false 00:05:57.763 } 00:05:57.763 }, 00:05:57.763 { 00:05:57.763 "method": "sock_impl_set_options", 00:05:57.763 "params": { 00:05:57.763 "impl_name": "posix", 00:05:57.763 "recv_buf_size": 2097152, 00:05:57.763 "send_buf_size": 2097152, 00:05:57.763 "enable_recv_pipe": true, 00:05:57.763 "enable_quickack": false, 00:05:57.763 "enable_placement_id": 0, 00:05:57.763 "enable_zerocopy_send_server": true, 00:05:57.763 "enable_zerocopy_send_client": false, 00:05:57.763 "zerocopy_threshold": 0, 00:05:57.763 "tls_version": 0, 00:05:57.763 "enable_ktls": false 00:05:57.763 } 00:05:57.763 } 00:05:57.763 ] 00:05:57.763 }, 00:05:57.763 { 00:05:57.763 "subsystem": "vmd", 00:05:57.763 "config": [] 00:05:57.763 }, 00:05:57.763 { 00:05:57.763 "subsystem": "accel", 00:05:57.763 "config": [ 00:05:57.763 { 00:05:57.763 "method": "accel_set_options", 00:05:57.763 "params": { 00:05:57.763 "small_cache_size": 128, 00:05:57.763 "large_cache_size": 16, 00:05:57.763 "task_count": 2048, 00:05:57.763 "sequence_count": 2048, 00:05:57.763 "buf_count": 2048 00:05:57.763 } 00:05:57.763 } 00:05:57.763 ] 00:05:57.763 }, 00:05:57.763 { 00:05:57.763 "subsystem": "bdev", 00:05:57.763 "config": [ 00:05:57.763 { 00:05:57.763 "method": "bdev_set_options", 00:05:57.763 "params": { 00:05:57.763 "bdev_io_pool_size": 65535, 00:05:57.763 "bdev_io_cache_size": 256, 00:05:57.763 "bdev_auto_examine": true, 00:05:57.763 "iobuf_small_cache_size": 128, 00:05:57.763 "iobuf_large_cache_size": 16 00:05:57.763 } 00:05:57.763 }, 00:05:57.763 { 00:05:57.763 "method": "bdev_raid_set_options", 00:05:57.763 "params": { 00:05:57.763 "process_window_size_kb": 1024, 00:05:57.763 "process_max_bandwidth_mb_sec": 0 00:05:57.763 } 00:05:57.763 }, 00:05:57.763 { 00:05:57.763 "method": "bdev_iscsi_set_options", 00:05:57.763 "params": { 00:05:57.763 "timeout_sec": 30 00:05:57.763 } 00:05:57.763 }, 00:05:57.763 { 00:05:57.763 "method": "bdev_nvme_set_options", 00:05:57.763 "params": { 00:05:57.763 "action_on_timeout": "none", 00:05:57.763 "timeout_us": 0, 00:05:57.763 "timeout_admin_us": 0, 00:05:57.763 "keep_alive_timeout_ms": 10000, 00:05:57.763 "arbitration_burst": 0, 00:05:57.763 "low_priority_weight": 0, 00:05:57.763 "medium_priority_weight": 0, 00:05:57.763 "high_priority_weight": 0, 00:05:57.763 "nvme_adminq_poll_period_us": 10000, 00:05:57.763 "nvme_ioq_poll_period_us": 0, 00:05:57.763 "io_queue_requests": 0, 00:05:57.763 "delay_cmd_submit": true, 00:05:57.763 "transport_retry_count": 4, 00:05:57.763 "bdev_retry_count": 3, 00:05:57.763 "transport_ack_timeout": 0, 00:05:57.763 "ctrlr_loss_timeout_sec": 0, 00:05:57.763 "reconnect_delay_sec": 0, 00:05:57.763 "fast_io_fail_timeout_sec": 0, 00:05:57.763 "disable_auto_failback": false, 00:05:57.763 "generate_uuids": false, 00:05:57.763 "transport_tos": 0, 00:05:57.763 "nvme_error_stat": false, 00:05:57.763 "rdma_srq_size": 0, 00:05:57.763 "io_path_stat": false, 
00:05:57.763 "allow_accel_sequence": false, 00:05:57.763 "rdma_max_cq_size": 0, 00:05:57.763 "rdma_cm_event_timeout_ms": 0, 00:05:57.763 "dhchap_digests": [ 00:05:57.763 "sha256", 00:05:57.763 "sha384", 00:05:57.763 "sha512" 00:05:57.763 ], 00:05:57.763 "dhchap_dhgroups": [ 00:05:57.763 "null", 00:05:57.763 "ffdhe2048", 00:05:57.763 "ffdhe3072", 00:05:57.763 "ffdhe4096", 00:05:57.763 "ffdhe6144", 00:05:57.763 "ffdhe8192" 00:05:57.763 ] 00:05:57.763 } 00:05:57.763 }, 00:05:57.763 { 00:05:57.763 "method": "bdev_nvme_set_hotplug", 00:05:57.763 "params": { 00:05:57.763 "period_us": 100000, 00:05:57.763 "enable": false 00:05:57.763 } 00:05:57.763 }, 00:05:57.763 { 00:05:57.763 "method": "bdev_wait_for_examine" 00:05:57.763 } 00:05:57.763 ] 00:05:57.763 }, 00:05:57.764 { 00:05:57.764 "subsystem": "scsi", 00:05:57.764 "config": null 00:05:57.764 }, 00:05:57.764 { 00:05:57.764 "subsystem": "scheduler", 00:05:57.764 "config": [ 00:05:57.764 { 00:05:57.764 "method": "framework_set_scheduler", 00:05:57.764 "params": { 00:05:57.764 "name": "static" 00:05:57.764 } 00:05:57.764 } 00:05:57.764 ] 00:05:57.764 }, 00:05:57.764 { 00:05:57.764 "subsystem": "vhost_scsi", 00:05:57.764 "config": [] 00:05:57.764 }, 00:05:57.764 { 00:05:57.764 "subsystem": "vhost_blk", 00:05:57.764 "config": [] 00:05:57.764 }, 00:05:57.764 { 00:05:57.764 "subsystem": "ublk", 00:05:57.764 "config": [] 00:05:57.764 }, 00:05:57.764 { 00:05:57.764 "subsystem": "nbd", 00:05:57.764 "config": [] 00:05:57.764 }, 00:05:57.764 { 00:05:57.764 "subsystem": "nvmf", 00:05:57.764 "config": [ 00:05:57.764 { 00:05:57.764 "method": "nvmf_set_config", 00:05:57.764 "params": { 00:05:57.764 "discovery_filter": "match_any", 00:05:57.764 "admin_cmd_passthru": { 00:05:57.764 "identify_ctrlr": false 00:05:57.764 }, 00:05:57.764 "dhchap_digests": [ 00:05:57.764 "sha256", 00:05:57.764 "sha384", 00:05:57.764 "sha512" 00:05:57.764 ], 00:05:57.764 "dhchap_dhgroups": [ 00:05:57.764 "null", 00:05:57.764 "ffdhe2048", 00:05:57.764 "ffdhe3072", 00:05:57.764 "ffdhe4096", 00:05:57.764 "ffdhe6144", 00:05:57.764 "ffdhe8192" 00:05:57.764 ] 00:05:57.764 } 00:05:57.764 }, 00:05:57.764 { 00:05:57.764 "method": "nvmf_set_max_subsystems", 00:05:57.764 "params": { 00:05:57.764 "max_subsystems": 1024 00:05:57.764 } 00:05:57.764 }, 00:05:57.764 { 00:05:57.764 "method": "nvmf_set_crdt", 00:05:57.764 "params": { 00:05:57.764 "crdt1": 0, 00:05:57.764 "crdt2": 0, 00:05:57.764 "crdt3": 0 00:05:57.764 } 00:05:57.764 }, 00:05:57.764 { 00:05:57.764 "method": "nvmf_create_transport", 00:05:57.764 "params": { 00:05:57.764 "trtype": "TCP", 00:05:57.764 "max_queue_depth": 128, 00:05:57.764 "max_io_qpairs_per_ctrlr": 127, 00:05:57.764 "in_capsule_data_size": 4096, 00:05:57.764 "max_io_size": 131072, 00:05:57.764 "io_unit_size": 131072, 00:05:57.764 "max_aq_depth": 128, 00:05:57.764 "num_shared_buffers": 511, 00:05:57.764 "buf_cache_size": 4294967295, 00:05:57.764 "dif_insert_or_strip": false, 00:05:57.764 "zcopy": false, 00:05:57.764 "c2h_success": true, 00:05:57.764 "sock_priority": 0, 00:05:57.764 "abort_timeout_sec": 1, 00:05:57.764 "ack_timeout": 0, 00:05:57.764 "data_wr_pool_size": 0 00:05:57.764 } 00:05:57.764 } 00:05:57.764 ] 00:05:57.764 }, 00:05:57.764 { 00:05:57.764 "subsystem": "iscsi", 00:05:57.764 "config": [ 00:05:57.764 { 00:05:57.764 "method": "iscsi_set_options", 00:05:57.764 "params": { 00:05:57.764 "node_base": "iqn.2016-06.io.spdk", 00:05:57.764 "max_sessions": 128, 00:05:57.764 "max_connections_per_session": 2, 00:05:57.764 "max_queue_depth": 64, 00:05:57.764 
"default_time2wait": 2, 00:05:57.764 "default_time2retain": 20, 00:05:57.764 "first_burst_length": 8192, 00:05:57.764 "immediate_data": true, 00:05:57.764 "allow_duplicated_isid": false, 00:05:57.764 "error_recovery_level": 0, 00:05:57.764 "nop_timeout": 60, 00:05:57.764 "nop_in_interval": 30, 00:05:57.764 "disable_chap": false, 00:05:57.764 "require_chap": false, 00:05:57.764 "mutual_chap": false, 00:05:57.764 "chap_group": 0, 00:05:57.764 "max_large_datain_per_connection": 64, 00:05:57.764 "max_r2t_per_connection": 4, 00:05:57.764 "pdu_pool_size": 36864, 00:05:57.764 "immediate_data_pool_size": 16384, 00:05:57.764 "data_out_pool_size": 2048 00:05:57.764 } 00:05:57.764 } 00:05:57.764 ] 00:05:57.764 } 00:05:57.764 ] 00:05:57.764 } 00:05:57.764 03:50:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:57.764 03:50:40 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58066 00:05:57.764 03:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58066 ']' 00:05:57.764 03:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58066 00:05:57.764 03:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:57.764 03:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.764 03:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58066 00:05:57.764 03:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.764 03:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.764 03:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58066' 00:05:57.764 killing process with pid 58066 00:05:57.764 03:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58066 00:05:57.764 03:50:40 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58066 00:06:00.355 03:50:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58122 00:06:00.355 03:50:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:00.355 03:50:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:05.629 03:50:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58122 00:06:05.629 03:50:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58122 ']' 00:06:05.629 03:50:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58122 00:06:05.629 03:50:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:05.629 03:50:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.629 03:50:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58122 00:06:05.629 killing process with pid 58122 00:06:05.629 03:50:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.629 03:50:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.629 03:50:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58122' 00:06:05.629 03:50:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58122 00:06:05.629 03:50:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58122 00:06:08.168 03:50:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:08.168 03:50:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:08.168 ************************************ 00:06:08.168 END TEST skip_rpc_with_json 00:06:08.168 ************************************ 00:06:08.168 00:06:08.168 real 0m11.334s 00:06:08.168 user 0m10.729s 00:06:08.168 sys 0m0.914s 00:06:08.168 03:50:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.168 03:50:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:08.168 03:50:50 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:08.168 03:50:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.168 03:50:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.168 03:50:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.168 ************************************ 00:06:08.168 START TEST skip_rpc_with_delay 00:06:08.168 ************************************ 00:06:08.168 03:50:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:08.168 03:50:50 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:08.168 03:50:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:08.168 03:50:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:08.168 03:50:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.168 03:50:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.168 03:50:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.168 03:50:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.168 03:50:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.168 03:50:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:08.168 03:50:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:08.168 03:50:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:08.168 03:50:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:08.168 [2024-12-07 03:50:50.484700] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
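The app.c ERROR just above is the assertion at the core of skip_rpc_with_delay: --wait-for-rpc tells the app to pause initialization until an RPC arrives, which is meaningless when --no-rpc-server suppresses the RPC server entirely, so spdk_tgt refuses the combination and exits non-zero; the NOT wrapper in the trace that follows turns that failure into a pass. A sketch of the same check, reusing the exact invocation from the trace:

    # expected to fail fast with the app.c error quoted above
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo 'unexpected: spdk_tgt accepted --no-rpc-server together with --wait-for-rpc' >&2
        exit 1
    fi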
00:06:08.168 03:50:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:08.168 03:50:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:08.168 03:50:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:08.168 03:50:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:08.168 00:06:08.168 real 0m0.183s 00:06:08.168 user 0m0.100s 00:06:08.168 sys 0m0.081s 00:06:08.168 03:50:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.168 03:50:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:08.168 ************************************ 00:06:08.168 END TEST skip_rpc_with_delay 00:06:08.168 ************************************ 00:06:08.168 03:50:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:08.168 03:50:50 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:08.168 03:50:50 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:08.168 03:50:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.168 03:50:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.168 03:50:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.168 ************************************ 00:06:08.168 START TEST exit_on_failed_rpc_init 00:06:08.168 ************************************ 00:06:08.168 03:50:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:08.168 03:50:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58260 00:06:08.168 03:50:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.168 03:50:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58260 00:06:08.168 03:50:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58260 ']' 00:06:08.169 03:50:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.169 03:50:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.169 03:50:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.169 03:50:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.169 03:50:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:08.169 [2024-12-07 03:50:50.751339] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
00:06:08.169 [2024-12-07 03:50:50.751465] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58260 ] 00:06:08.429 [2024-12-07 03:50:50.934261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.429 [2024-12-07 03:50:51.046775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.369 03:50:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.369 03:50:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:09.369 03:50:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:09.369 03:50:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:09.369 03:50:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:09.369 03:50:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:09.369 03:50:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:09.369 03:50:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.369 03:50:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:09.369 03:50:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.369 03:50:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:09.369 03:50:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.369 03:50:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:09.369 03:50:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:09.369 03:50:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:09.369 [2024-12-07 03:50:52.025804] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:06:09.369 [2024-12-07 03:50:52.025946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58279 ] 00:06:09.629 [2024-12-07 03:50:52.210482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.629 [2024-12-07 03:50:52.328234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.629 [2024-12-07 03:50:52.328342] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:06:09.629 [2024-12-07 03:50:52.328359] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:09.629 [2024-12-07 03:50:52.328378] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:09.888 03:50:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:09.888 03:50:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:09.888 03:50:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:09.888 03:50:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:09.888 03:50:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:09.888 03:50:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:09.889 03:50:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:09.889 03:50:52 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58260 00:06:09.889 03:50:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58260 ']' 00:06:09.889 03:50:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58260 00:06:09.889 03:50:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:09.889 03:50:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.889 03:50:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58260 00:06:10.148 03:50:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.148 killing process with pid 58260 00:06:10.148 03:50:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.148 03:50:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58260' 00:06:10.148 03:50:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58260 00:06:10.148 03:50:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58260 00:06:12.685 00:06:12.685 real 0m4.410s 00:06:12.685 user 0m4.720s 00:06:12.685 sys 0m0.637s 00:06:12.685 03:50:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.685 03:50:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:12.685 ************************************ 00:06:12.685 END TEST exit_on_failed_rpc_init 00:06:12.685 ************************************ 00:06:12.685 03:50:55 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:12.685 00:06:12.685 real 0m23.993s 00:06:12.685 user 0m22.796s 00:06:12.685 sys 0m2.362s 00:06:12.685 03:50:55 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.685 03:50:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.685 ************************************ 00:06:12.685 END TEST skip_rpc 00:06:12.685 ************************************ 00:06:12.685 03:50:55 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:12.685 03:50:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.685 03:50:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.685 03:50:55 -- common/autotest_common.sh@10 -- # set +x 00:06:12.685 
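The two rpc.c ERRORs above are the failure path exit_on_failed_rpc_init provokes: the first spdk_tgt (pid 58260, core mask 0x1) owns /var/tmp/spdk.sock, so the second instance on core mask 0x2 cannot bind its RPC listener, spdk_app_stop reports a non-zero code, and the shell sees exit status 234, which the harness normalizes step by step (234 > 128, so 234 - 128 = 106, then the case statement maps it to 1) before killing the surviving target. A sketch of the collision, assuming the same binary and the waitforlisten helper from autotest_common.sh:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &    # first target claims /var/tmp/spdk.sock
    first=$!
    waitforlisten "$first"    # autotest helper: poll until the RPC socket answers
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2; then
        echo 'unexpected: second target started on a busy RPC socket' >&2
    fi
    kill "$first"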
************************************ 00:06:12.685 START TEST rpc_client 00:06:12.685 ************************************ 00:06:12.685 03:50:55 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:12.685 * Looking for test storage... 00:06:12.685 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:12.685 03:50:55 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:12.685 03:50:55 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:06:12.685 03:50:55 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:12.685 03:50:55 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:12.685 03:50:55 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.685 03:50:55 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.685 03:50:55 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.685 03:50:55 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.685 03:50:55 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.685 03:50:55 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.685 03:50:55 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.685 03:50:55 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.685 03:50:55 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.685 03:50:55 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.685 03:50:55 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.685 03:50:55 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:12.685 03:50:55 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:12.685 03:50:55 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.685 03:50:55 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.685 03:50:55 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:12.685 03:50:55 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:12.685 03:50:55 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.685 03:50:55 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:12.685 03:50:55 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.685 03:50:55 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:12.685 03:50:55 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:12.685 03:50:55 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.685 03:50:55 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:12.685 03:50:55 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.685 03:50:55 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.685 03:50:55 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.685 03:50:55 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:12.685 03:50:55 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.685 03:50:55 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:12.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.685 --rc genhtml_branch_coverage=1 00:06:12.685 --rc genhtml_function_coverage=1 00:06:12.685 --rc genhtml_legend=1 00:06:12.685 --rc geninfo_all_blocks=1 00:06:12.685 --rc geninfo_unexecuted_blocks=1 00:06:12.685 00:06:12.685 ' 00:06:12.685 03:50:55 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:12.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.685 --rc genhtml_branch_coverage=1 00:06:12.685 --rc genhtml_function_coverage=1 00:06:12.685 --rc genhtml_legend=1 00:06:12.685 --rc geninfo_all_blocks=1 00:06:12.685 --rc geninfo_unexecuted_blocks=1 00:06:12.685 00:06:12.685 ' 00:06:12.685 03:50:55 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:12.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.685 --rc genhtml_branch_coverage=1 00:06:12.685 --rc genhtml_function_coverage=1 00:06:12.685 --rc genhtml_legend=1 00:06:12.685 --rc geninfo_all_blocks=1 00:06:12.685 --rc geninfo_unexecuted_blocks=1 00:06:12.685 00:06:12.685 ' 00:06:12.685 03:50:55 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:12.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.685 --rc genhtml_branch_coverage=1 00:06:12.685 --rc genhtml_function_coverage=1 00:06:12.685 --rc genhtml_legend=1 00:06:12.685 --rc geninfo_all_blocks=1 00:06:12.685 --rc geninfo_unexecuted_blocks=1 00:06:12.685 00:06:12.685 ' 00:06:12.685 03:50:55 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:12.943 OK 00:06:12.943 03:50:55 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:12.943 00:06:12.943 real 0m0.272s 00:06:12.943 user 0m0.143s 00:06:12.943 sys 0m0.149s 00:06:12.943 03:50:55 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.943 03:50:55 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:12.943 ************************************ 00:06:12.943 END TEST rpc_client 00:06:12.943 ************************************ 00:06:12.943 03:50:55 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:12.943 03:50:55 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.943 03:50:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.943 03:50:55 -- common/autotest_common.sh@10 -- # set +x 00:06:12.943 ************************************ 00:06:12.943 START TEST json_config 00:06:12.943 ************************************ 00:06:12.943 03:50:55 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:12.943 03:50:55 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:12.943 03:50:55 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:06:12.943 03:50:55 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:12.943 03:50:55 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:12.944 03:50:55 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.944 03:50:55 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.944 03:50:55 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.944 03:50:55 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.944 03:50:55 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.944 03:50:55 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.944 03:50:55 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.944 03:50:55 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.944 03:50:55 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.944 03:50:55 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.944 03:50:55 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.944 03:50:55 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:12.944 03:50:55 json_config -- scripts/common.sh@345 -- # : 1 00:06:12.944 03:50:55 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.944 03:50:55 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.944 03:50:55 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:12.944 03:50:55 json_config -- scripts/common.sh@353 -- # local d=1 00:06:12.944 03:50:55 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.944 03:50:55 json_config -- scripts/common.sh@355 -- # echo 1 00:06:12.944 03:50:55 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.944 03:50:55 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:12.944 03:50:55 json_config -- scripts/common.sh@353 -- # local d=2 00:06:12.944 03:50:55 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.944 03:50:55 json_config -- scripts/common.sh@355 -- # echo 2 00:06:12.944 03:50:55 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.944 03:50:55 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.944 03:50:55 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.944 03:50:55 json_config -- scripts/common.sh@368 -- # return 0 00:06:12.944 03:50:55 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.944 03:50:55 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:12.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.944 --rc genhtml_branch_coverage=1 00:06:12.944 --rc genhtml_function_coverage=1 00:06:12.944 --rc genhtml_legend=1 00:06:12.944 --rc geninfo_all_blocks=1 00:06:12.944 --rc geninfo_unexecuted_blocks=1 00:06:12.944 00:06:12.944 ' 00:06:12.944 03:50:55 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:12.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.944 --rc genhtml_branch_coverage=1 00:06:12.944 --rc genhtml_function_coverage=1 00:06:12.944 --rc genhtml_legend=1 00:06:12.944 --rc geninfo_all_blocks=1 00:06:12.944 --rc geninfo_unexecuted_blocks=1 00:06:12.944 00:06:12.944 ' 00:06:12.944 03:50:55 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:12.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.944 --rc genhtml_branch_coverage=1 00:06:12.944 --rc genhtml_function_coverage=1 00:06:12.944 --rc genhtml_legend=1 00:06:12.944 --rc geninfo_all_blocks=1 00:06:12.944 --rc geninfo_unexecuted_blocks=1 00:06:12.944 00:06:12.944 ' 00:06:12.944 03:50:55 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:12.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.944 --rc genhtml_branch_coverage=1 00:06:12.944 --rc genhtml_function_coverage=1 00:06:12.944 --rc genhtml_legend=1 00:06:12.944 --rc geninfo_all_blocks=1 00:06:12.944 --rc geninfo_unexecuted_blocks=1 00:06:12.944 00:06:12.944 ' 00:06:12.944 03:50:55 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:12.944 03:50:55 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:13.203 03:50:55 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:13.203 03:50:55 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:13.203 03:50:55 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:13.203 03:50:55 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:13.203 03:50:55 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:13.203 03:50:55 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:13.203 03:50:55 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:13.203 03:50:55 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:13.203 03:50:55 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:13.203 03:50:55 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:13.203 03:50:55 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:099867bc-932e-4148-8a2f-ef14cc589e12 00:06:13.203 03:50:55 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=099867bc-932e-4148-8a2f-ef14cc589e12 00:06:13.203 03:50:55 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:13.203 03:50:55 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:13.203 03:50:55 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:13.203 03:50:55 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:13.203 03:50:55 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:13.203 03:50:55 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:13.203 03:50:55 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.203 03:50:55 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.203 03:50:55 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.203 03:50:55 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.203 03:50:55 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.203 03:50:55 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.203 03:50:55 json_config -- paths/export.sh@5 -- # export PATH 00:06:13.203 03:50:55 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.203 03:50:55 json_config -- nvmf/common.sh@51 -- # : 0 00:06:13.203 03:50:55 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:13.203 03:50:55 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:13.203 03:50:55 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:13.203 03:50:55 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:13.203 03:50:55 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:13.203 03:50:55 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:13.203 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:13.203 03:50:55 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:13.203 03:50:55 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:13.203 03:50:55 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:13.203 03:50:55 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:13.203 03:50:55 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:13.203 03:50:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:13.203 03:50:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:13.203 03:50:55 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:13.203 03:50:55 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:13.203 WARNING: No tests are enabled so not running JSON configuration tests 00:06:13.203 03:50:55 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:13.203 00:06:13.203 real 0m0.200s 00:06:13.203 user 0m0.124s 00:06:13.203 sys 0m0.083s 00:06:13.203 03:50:55 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.203 03:50:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:13.203 ************************************ 00:06:13.203 END TEST json_config 00:06:13.203 ************************************ 00:06:13.203 03:50:55 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:13.203 03:50:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:13.203 03:50:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:13.203 03:50:55 -- common/autotest_common.sh@10 -- # set +x 00:06:13.203 ************************************ 00:06:13.203 START TEST json_config_extra_key 00:06:13.203 ************************************ 00:06:13.203 03:50:55 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:13.203 03:50:55 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:13.203 03:50:55 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:06:13.203 03:50:55 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:13.203 03:50:55 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:13.203 03:50:55 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:13.203 03:50:55 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:13.203 03:50:55 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:13.203 03:50:55 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:13.203 03:50:55 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:13.203 03:50:55 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:13.203 03:50:55 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:13.203 03:50:55 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:13.203 03:50:55 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:13.203 03:50:55 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:13.204 03:50:55 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:13.204 03:50:55 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:13.204 03:50:55 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:13.204 03:50:55 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:13.204 03:50:55 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:13.204 03:50:55 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:13.463 03:50:55 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:13.463 03:50:55 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:13.463 03:50:55 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:13.463 03:50:55 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:13.463 03:50:55 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:13.463 03:50:55 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:13.463 03:50:55 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:13.463 03:50:55 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:13.463 03:50:55 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:13.463 03:50:55 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:13.463 03:50:55 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:13.463 03:50:55 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:13.463 03:50:55 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:13.463 03:50:55 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:13.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.463 --rc genhtml_branch_coverage=1 00:06:13.463 --rc genhtml_function_coverage=1 00:06:13.463 --rc genhtml_legend=1 00:06:13.463 --rc geninfo_all_blocks=1 00:06:13.463 --rc geninfo_unexecuted_blocks=1 00:06:13.463 00:06:13.463 ' 00:06:13.463 03:50:55 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:13.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.463 --rc genhtml_branch_coverage=1 00:06:13.463 --rc genhtml_function_coverage=1 00:06:13.463 --rc genhtml_legend=1 00:06:13.463 --rc geninfo_all_blocks=1 00:06:13.463 --rc geninfo_unexecuted_blocks=1 00:06:13.463 00:06:13.463 ' 00:06:13.463 03:50:55 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:13.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.463 --rc genhtml_branch_coverage=1 00:06:13.463 --rc genhtml_function_coverage=1 00:06:13.463 --rc genhtml_legend=1 00:06:13.463 --rc geninfo_all_blocks=1 00:06:13.463 --rc geninfo_unexecuted_blocks=1 00:06:13.463 00:06:13.463 ' 00:06:13.463 03:50:55 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:13.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:13.463 --rc genhtml_branch_coverage=1 00:06:13.463 --rc 
genhtml_function_coverage=1 00:06:13.463 --rc genhtml_legend=1 00:06:13.463 --rc geninfo_all_blocks=1 00:06:13.463 --rc geninfo_unexecuted_blocks=1 00:06:13.463 00:06:13.463 ' 00:06:13.463 03:50:55 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:13.463 03:50:55 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:13.463 03:50:55 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:13.463 03:50:55 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:13.463 03:50:55 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:13.463 03:50:55 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:13.463 03:50:55 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:13.463 03:50:55 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:13.463 03:50:55 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:13.463 03:50:55 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:13.464 03:50:55 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:13.464 03:50:55 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:13.464 03:50:55 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:099867bc-932e-4148-8a2f-ef14cc589e12 00:06:13.464 03:50:55 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=099867bc-932e-4148-8a2f-ef14cc589e12 00:06:13.464 03:50:55 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:13.464 03:50:55 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:13.464 03:50:55 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:13.464 03:50:55 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:13.464 03:50:55 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:13.464 03:50:55 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:13.464 03:50:55 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:13.464 03:50:55 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:13.464 03:50:55 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:13.464 03:50:55 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.464 03:50:55 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.464 03:50:55 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.464 03:50:55 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:13.464 03:50:55 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:13.464 03:50:55 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:13.464 03:50:55 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:13.464 03:50:55 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:13.464 03:50:55 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:13.464 03:50:55 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:13.464 03:50:55 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:13.464 03:50:55 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:13.464 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:13.464 03:50:55 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:13.464 03:50:55 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:13.464 03:50:55 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:13.464 03:50:55 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:13.464 03:50:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:13.464 03:50:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:13.464 03:50:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:13.464 03:50:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:13.464 03:50:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:13.464 03:50:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:13.464 03:50:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:13.464 03:50:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:13.464 03:50:55 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:13.464 INFO: launching applications... 00:06:13.464 03:50:55 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
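Worth flagging in the trace above: both json_config and json_config_extra_key log `[: : integer expression expected` when nvmf/common.sh line 33 evaluates `'[' '' -eq 1 ']'`. test(1)'s `-eq` demands integer operands on both sides, and the variable under test expands to an empty string, so the check complains on stderr and falls through to the false branch. (The ever-growing PATH in the paths/export.sh lines is a related symptom of re-sourcing the same setup per test.) A minimal reproduction with a guarded alternative follows; the variable name is illustrative, not the script's own:

```bash
#!/usr/bin/env bash
# Reproduces the logged failure: test(1)'s -eq needs integers on
# both sides, so an unset/empty variable triggers the complaint.
MODE_FLAG=""
if [ "$MODE_FLAG" -eq 1 ]; then    # stderr: "[: : integer expression expected"
    echo "flag set"
fi

# Guarded form: default the empty/unset value to 0 before comparing.
if [ "${MODE_FLAG:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag clear"
fi
```

The run itself is unaffected because `[` simply returns non-zero, but the message lands on stderr every time common.sh is sourced, which is why it appears once per test above.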
00:06:13.464 03:50:55 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:13.464 03:50:55 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:13.464 03:50:55 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:13.464 03:50:55 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:13.464 03:50:55 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:13.464 03:50:55 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:13.464 03:50:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:13.464 03:50:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:13.464 03:50:55 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58489 00:06:13.464 03:50:55 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:13.464 Waiting for target to run... 00:06:13.464 03:50:55 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:13.464 03:50:55 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58489 /var/tmp/spdk_tgt.sock 00:06:13.464 03:50:55 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58489 ']' 00:06:13.464 03:50:55 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:13.464 03:50:55 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:13.464 03:50:55 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:13.464 03:50:55 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.464 03:50:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:13.464 [2024-12-07 03:50:56.090684] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:06:13.464 [2024-12-07 03:50:56.090816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58489 ] 00:06:14.030 [2024-12-07 03:50:56.489193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.030 [2024-12-07 03:50:56.599966] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.963 03:50:57 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:14.963 03:50:57 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:14.963 00:06:14.963 03:50:57 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:14.963 INFO: shutting down applications... 00:06:14.963 03:50:57 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
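The json_config_test_start_app call above launches `spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json extra_key.json` and then parks in `waitforlisten` (note the `max_retries=100` local) until pid 58489 answers on the UNIX socket; the closing `(( i == 0 ))` / `return 0` pair is the loop's success bookkeeping. Below is a simplified sketch of that wait pattern, assuming a socket-existence check stands in for the framework's real RPC probe:

```bash
# Wait-for-target loop in the spirit of waitforlisten: succeed once the
# process is alive and its RPC socket shows up, fail if either gives out.
wait_for_target() {
    local pid=$1 sock=$2 retries=${3:-100} i
    for ((i = 0; i < retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
        [ -S "$sock" ] && return 0               # socket created; good enough here
        sleep 0.1
    done
    return 1                                     # retry budget exhausted
}

# Example wiring (binary path and socket mirror the trace):
# build/bin/spdk_tgt -r /var/tmp/spdk_tgt.sock &
# wait_for_target "$!" /var/tmp/spdk_tgt.sock || exit 1
```

The real helper goes further and issues an actual RPC before declaring success, since the socket file can exist before the server is accepting connections.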
00:06:14.963 03:50:57 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:14.963 03:50:57 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:14.963 03:50:57 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:14.963 03:50:57 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58489 ]] 00:06:14.963 03:50:57 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58489 00:06:14.963 03:50:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:14.963 03:50:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:14.963 03:50:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58489 00:06:14.963 03:50:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:15.252 03:50:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:15.252 03:50:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:15.252 03:50:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58489 00:06:15.252 03:50:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:15.817 03:50:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:15.817 03:50:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:15.817 03:50:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58489 00:06:15.817 03:50:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:16.401 03:50:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:16.401 03:50:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:16.401 03:50:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58489 00:06:16.401 03:50:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:16.966 03:50:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:16.966 03:50:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:16.966 03:50:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58489 00:06:16.966 03:50:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:17.225 03:50:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:17.225 03:50:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:17.225 03:50:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58489 00:06:17.225 03:50:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:17.793 03:51:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:17.793 03:51:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:17.793 03:51:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58489 00:06:17.793 03:51:00 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:17.793 03:51:00 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:17.793 03:51:00 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:17.793 SPDK target shutdown done 00:06:17.793 03:51:00 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:17.793 Success 00:06:17.793 03:51:00 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:17.793 00:06:17.793 real 0m4.653s 00:06:17.793 user 0m4.158s 00:06:17.793 sys 0m0.594s 00:06:17.793 
03:51:00 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.793 03:51:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:17.793 ************************************ 00:06:17.793 END TEST json_config_extra_key 00:06:17.793 ************************************ 00:06:17.793 03:51:00 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:17.793 03:51:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.793 03:51:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.793 03:51:00 -- common/autotest_common.sh@10 -- # set +x 00:06:17.793 ************************************ 00:06:17.793 START TEST alias_rpc 00:06:17.793 ************************************ 00:06:17.793 03:51:00 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:18.053 * Looking for test storage... 00:06:18.053 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:18.053 03:51:00 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:18.053 03:51:00 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:06:18.053 03:51:00 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:18.053 03:51:00 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:18.053 03:51:00 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.053 03:51:00 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.053 03:51:00 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.053 03:51:00 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.053 03:51:00 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.053 03:51:00 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.053 03:51:00 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.053 03:51:00 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.053 03:51:00 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.053 03:51:00 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.053 03:51:00 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.053 03:51:00 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:18.053 03:51:00 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:18.053 03:51:00 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.053 03:51:00 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.053 03:51:00 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:18.053 03:51:00 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:18.053 03:51:00 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.053 03:51:00 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:18.053 03:51:00 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.053 03:51:00 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:18.053 03:51:00 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:18.053 03:51:00 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.053 03:51:00 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:18.053 03:51:00 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.053 03:51:00 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.053 03:51:00 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.053 03:51:00 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:18.053 03:51:00 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.053 03:51:00 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:18.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.053 --rc genhtml_branch_coverage=1 00:06:18.053 --rc genhtml_function_coverage=1 00:06:18.053 --rc genhtml_legend=1 00:06:18.053 --rc geninfo_all_blocks=1 00:06:18.053 --rc geninfo_unexecuted_blocks=1 00:06:18.053 00:06:18.053 ' 00:06:18.053 03:51:00 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:18.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.053 --rc genhtml_branch_coverage=1 00:06:18.053 --rc genhtml_function_coverage=1 00:06:18.053 --rc genhtml_legend=1 00:06:18.053 --rc geninfo_all_blocks=1 00:06:18.053 --rc geninfo_unexecuted_blocks=1 00:06:18.053 00:06:18.053 ' 00:06:18.053 03:51:00 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:18.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.053 --rc genhtml_branch_coverage=1 00:06:18.053 --rc genhtml_function_coverage=1 00:06:18.053 --rc genhtml_legend=1 00:06:18.053 --rc geninfo_all_blocks=1 00:06:18.053 --rc geninfo_unexecuted_blocks=1 00:06:18.053 00:06:18.053 ' 00:06:18.053 03:51:00 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:18.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.053 --rc genhtml_branch_coverage=1 00:06:18.053 --rc genhtml_function_coverage=1 00:06:18.053 --rc genhtml_legend=1 00:06:18.053 --rc geninfo_all_blocks=1 00:06:18.053 --rc geninfo_unexecuted_blocks=1 00:06:18.053 00:06:18.053 ' 00:06:18.053 03:51:00 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:18.053 03:51:00 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58595 00:06:18.053 03:51:00 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:18.053 03:51:00 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58595 00:06:18.053 03:51:00 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58595 ']' 00:06:18.053 03:51:00 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.053 03:51:00 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
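The `lt 1.15 2` dance that precedes every test (here deciding whether the installed lcov needs the `--rc lcov_branch_coverage=1 ...` compatibility options) is a field-by-field dotted-version compare: both strings are split into `ver1`/`ver2` arrays, absent fields default to 0, and the loop returns as soon as a field differs. A compact reconstruction of just the less-than case (the traced cmp_versions also handles the other operators, splits on `-` and `:` as well as `.`, and assumes numeric fields):

```bash
# Dotted-version less-than, reconstructed from the cmp_versions trace:
# returns 0 (true) when $1 < $2, e.g. version_lt 1.15 2.
version_lt() {
    local IFS=.            # split on dots only here; the trace uses IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
        local x=${a[i]:-0} y=${b[i]:-0}   # absent fields compare as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1    # equal versions are not less-than
}

# Mirrors the traced pipeline: lcov --version | awk '{print $NF}'
version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "old lcov: add --rc options"
```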
00:06:18.053 03:51:00 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.053 03:51:00 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.053 03:51:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.313 [2024-12-07 03:51:00.810886] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:06:18.313 [2024-12-07 03:51:00.811049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58595 ] 00:06:18.313 [2024-12-07 03:51:00.993722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.573 [2024-12-07 03:51:01.110373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.511 03:51:01 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:19.511 03:51:01 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:19.511 03:51:01 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:19.511 03:51:02 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58595 00:06:19.511 03:51:02 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58595 ']' 00:06:19.511 03:51:02 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58595 00:06:19.770 03:51:02 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:19.770 03:51:02 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:19.770 03:51:02 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58595 00:06:19.770 03:51:02 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:19.770 killing process with pid 58595 00:06:19.770 03:51:02 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:19.770 03:51:02 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58595' 00:06:19.770 03:51:02 alias_rpc -- common/autotest_common.sh@973 -- # kill 58595 00:06:19.770 03:51:02 alias_rpc -- common/autotest_common.sh@978 -- # wait 58595 00:06:22.308 00:06:22.308 real 0m4.219s 00:06:22.308 user 0m4.181s 00:06:22.308 sys 0m0.608s 00:06:22.308 03:51:04 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.309 ************************************ 00:06:22.309 END TEST alias_rpc 00:06:22.309 ************************************ 00:06:22.309 03:51:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.309 03:51:04 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:22.309 03:51:04 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:22.309 03:51:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:22.309 03:51:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.309 03:51:04 -- common/autotest_common.sh@10 -- # set +x 00:06:22.309 ************************************ 00:06:22.309 START TEST spdkcli_tcp 00:06:22.309 ************************************ 00:06:22.309 03:51:04 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:22.309 * Looking for test storage... 
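alias_rpc's `killprocess 58595` above is the careful half of the teardown story: it re-checks the pid is alive (`kill -0`), confirms via `ps --no-headers -o comm=` that the pid still names the expected `reactor_0` process, then kills and `wait`s to reap it. json_config_extra_key's shutdown earlier showed the other half, polling `kill -0` every 0.5 s for up to 30 tries after a SIGINT. A sketch combining both patterns follows; the SIGKILL fallback at the end is an addition the traced loop leaves to its caller:

```bash
# Teardown pattern from the traces: verify identity, signal, poll, reap.
killprocess_sketch() {
    local pid=$1 expect=${2:-reactor_0} tries=${3:-30} i
    kill -0 "$pid" 2>/dev/null || return 0                 # already gone
    [ "$(ps --no-headers -o comm= "$pid")" = "$expect" ] || return 1
    kill -SIGINT "$pid"
    for ((i = 0; i < tries; i++)); do
        kill -0 "$pid" 2>/dev/null || break                # clean exit
        sleep 0.5
    done
    kill -0 "$pid" 2>/dev/null && kill -9 "$pid"           # assumed fallback
    wait "$pid" 2>/dev/null                                # reap (own child only)
}
```

The comm-name check guards against pid reuse: killing a recycled pid that no longer belongs to the target would take down an unrelated process.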
00:06:22.309 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:22.309 03:51:04 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:22.309 03:51:04 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:22.309 03:51:04 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:22.309 03:51:04 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:22.309 03:51:04 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.309 03:51:04 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.309 03:51:04 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.309 03:51:04 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.309 03:51:04 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.309 03:51:04 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.309 03:51:04 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.309 03:51:04 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.309 03:51:04 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.309 03:51:04 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.309 03:51:04 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.309 03:51:04 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:22.309 03:51:04 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:22.309 03:51:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.309 03:51:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:22.309 03:51:04 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:22.309 03:51:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:22.309 03:51:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.309 03:51:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:22.309 03:51:04 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.309 03:51:04 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:22.309 03:51:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:22.309 03:51:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.309 03:51:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:22.309 03:51:04 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.309 03:51:04 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.309 03:51:04 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.309 03:51:04 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:22.309 03:51:04 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.309 03:51:04 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:22.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.309 --rc genhtml_branch_coverage=1 00:06:22.309 --rc genhtml_function_coverage=1 00:06:22.309 --rc genhtml_legend=1 00:06:22.309 --rc geninfo_all_blocks=1 00:06:22.309 --rc geninfo_unexecuted_blocks=1 00:06:22.309 00:06:22.309 ' 00:06:22.309 03:51:04 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:22.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.309 --rc genhtml_branch_coverage=1 00:06:22.309 --rc genhtml_function_coverage=1 00:06:22.309 --rc genhtml_legend=1 00:06:22.309 --rc geninfo_all_blocks=1 00:06:22.309 --rc geninfo_unexecuted_blocks=1 00:06:22.309 
00:06:22.309 ' 00:06:22.309 03:51:04 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:22.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.309 --rc genhtml_branch_coverage=1 00:06:22.309 --rc genhtml_function_coverage=1 00:06:22.309 --rc genhtml_legend=1 00:06:22.309 --rc geninfo_all_blocks=1 00:06:22.309 --rc geninfo_unexecuted_blocks=1 00:06:22.309 00:06:22.309 ' 00:06:22.309 03:51:04 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:22.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.309 --rc genhtml_branch_coverage=1 00:06:22.309 --rc genhtml_function_coverage=1 00:06:22.309 --rc genhtml_legend=1 00:06:22.309 --rc geninfo_all_blocks=1 00:06:22.309 --rc geninfo_unexecuted_blocks=1 00:06:22.309 00:06:22.309 ' 00:06:22.309 03:51:04 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:22.309 03:51:04 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:22.309 03:51:04 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:22.309 03:51:04 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:22.309 03:51:04 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:22.309 03:51:04 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:22.309 03:51:04 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:22.309 03:51:04 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:22.309 03:51:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:22.309 03:51:04 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58707 00:06:22.309 03:51:04 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:22.309 03:51:04 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58707 00:06:22.309 03:51:04 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58707 ']' 00:06:22.309 03:51:04 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.309 03:51:04 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.309 03:51:04 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.309 03:51:04 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.309 03:51:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:22.591 [2024-12-07 03:51:05.098878] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
00:06:22.591 [2024-12-07 03:51:05.099016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58707 ] 00:06:22.591 [2024-12-07 03:51:05.283336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.849 [2024-12-07 03:51:05.402288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.849 [2024-12-07 03:51:05.402320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.785 03:51:06 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.785 03:51:06 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:23.785 03:51:06 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:23.785 03:51:06 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58725 00:06:23.785 03:51:06 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:23.785 [ 00:06:23.785 "bdev_malloc_delete", 00:06:23.785 "bdev_malloc_create", 00:06:23.785 "bdev_null_resize", 00:06:23.785 "bdev_null_delete", 00:06:23.785 "bdev_null_create", 00:06:23.785 "bdev_nvme_cuse_unregister", 00:06:23.785 "bdev_nvme_cuse_register", 00:06:23.785 "bdev_opal_new_user", 00:06:23.785 "bdev_opal_set_lock_state", 00:06:23.785 "bdev_opal_delete", 00:06:23.785 "bdev_opal_get_info", 00:06:23.785 "bdev_opal_create", 00:06:23.785 "bdev_nvme_opal_revert", 00:06:23.785 "bdev_nvme_opal_init", 00:06:23.785 "bdev_nvme_send_cmd", 00:06:23.785 "bdev_nvme_set_keys", 00:06:23.785 "bdev_nvme_get_path_iostat", 00:06:23.785 "bdev_nvme_get_mdns_discovery_info", 00:06:23.785 "bdev_nvme_stop_mdns_discovery", 00:06:23.785 "bdev_nvme_start_mdns_discovery", 00:06:23.785 "bdev_nvme_set_multipath_policy", 00:06:23.785 "bdev_nvme_set_preferred_path", 00:06:23.785 "bdev_nvme_get_io_paths", 00:06:23.785 "bdev_nvme_remove_error_injection", 00:06:23.785 "bdev_nvme_add_error_injection", 00:06:23.785 "bdev_nvme_get_discovery_info", 00:06:23.785 "bdev_nvme_stop_discovery", 00:06:23.785 "bdev_nvme_start_discovery", 00:06:23.785 "bdev_nvme_get_controller_health_info", 00:06:23.785 "bdev_nvme_disable_controller", 00:06:23.785 "bdev_nvme_enable_controller", 00:06:23.785 "bdev_nvme_reset_controller", 00:06:23.785 "bdev_nvme_get_transport_statistics", 00:06:23.785 "bdev_nvme_apply_firmware", 00:06:23.785 "bdev_nvme_detach_controller", 00:06:23.785 "bdev_nvme_get_controllers", 00:06:23.785 "bdev_nvme_attach_controller", 00:06:23.785 "bdev_nvme_set_hotplug", 00:06:23.785 "bdev_nvme_set_options", 00:06:23.785 "bdev_passthru_delete", 00:06:23.785 "bdev_passthru_create", 00:06:23.785 "bdev_lvol_set_parent_bdev", 00:06:23.785 "bdev_lvol_set_parent", 00:06:23.785 "bdev_lvol_check_shallow_copy", 00:06:23.785 "bdev_lvol_start_shallow_copy", 00:06:23.785 "bdev_lvol_grow_lvstore", 00:06:23.785 "bdev_lvol_get_lvols", 00:06:23.785 "bdev_lvol_get_lvstores", 00:06:23.785 "bdev_lvol_delete", 00:06:23.785 "bdev_lvol_set_read_only", 00:06:23.785 "bdev_lvol_resize", 00:06:23.785 "bdev_lvol_decouple_parent", 00:06:23.785 "bdev_lvol_inflate", 00:06:23.785 "bdev_lvol_rename", 00:06:23.785 "bdev_lvol_clone_bdev", 00:06:23.785 "bdev_lvol_clone", 00:06:23.786 "bdev_lvol_snapshot", 00:06:23.786 "bdev_lvol_create", 00:06:23.786 "bdev_lvol_delete_lvstore", 00:06:23.786 "bdev_lvol_rename_lvstore", 00:06:23.786 
"bdev_lvol_create_lvstore", 00:06:23.786 "bdev_raid_set_options", 00:06:23.786 "bdev_raid_remove_base_bdev", 00:06:23.786 "bdev_raid_add_base_bdev", 00:06:23.786 "bdev_raid_delete", 00:06:23.786 "bdev_raid_create", 00:06:23.786 "bdev_raid_get_bdevs", 00:06:23.786 "bdev_error_inject_error", 00:06:23.786 "bdev_error_delete", 00:06:23.786 "bdev_error_create", 00:06:23.786 "bdev_split_delete", 00:06:23.786 "bdev_split_create", 00:06:23.786 "bdev_delay_delete", 00:06:23.786 "bdev_delay_create", 00:06:23.786 "bdev_delay_update_latency", 00:06:23.786 "bdev_zone_block_delete", 00:06:23.786 "bdev_zone_block_create", 00:06:23.786 "blobfs_create", 00:06:23.786 "blobfs_detect", 00:06:23.786 "blobfs_set_cache_size", 00:06:23.786 "bdev_xnvme_delete", 00:06:23.786 "bdev_xnvme_create", 00:06:23.786 "bdev_aio_delete", 00:06:23.786 "bdev_aio_rescan", 00:06:23.786 "bdev_aio_create", 00:06:23.786 "bdev_ftl_set_property", 00:06:23.786 "bdev_ftl_get_properties", 00:06:23.786 "bdev_ftl_get_stats", 00:06:23.786 "bdev_ftl_unmap", 00:06:23.786 "bdev_ftl_unload", 00:06:23.786 "bdev_ftl_delete", 00:06:23.786 "bdev_ftl_load", 00:06:23.786 "bdev_ftl_create", 00:06:23.786 "bdev_virtio_attach_controller", 00:06:23.786 "bdev_virtio_scsi_get_devices", 00:06:23.786 "bdev_virtio_detach_controller", 00:06:23.786 "bdev_virtio_blk_set_hotplug", 00:06:23.786 "bdev_iscsi_delete", 00:06:23.786 "bdev_iscsi_create", 00:06:23.786 "bdev_iscsi_set_options", 00:06:23.786 "accel_error_inject_error", 00:06:23.786 "ioat_scan_accel_module", 00:06:23.786 "dsa_scan_accel_module", 00:06:23.786 "iaa_scan_accel_module", 00:06:23.786 "keyring_file_remove_key", 00:06:23.786 "keyring_file_add_key", 00:06:23.786 "keyring_linux_set_options", 00:06:23.786 "fsdev_aio_delete", 00:06:23.786 "fsdev_aio_create", 00:06:23.786 "iscsi_get_histogram", 00:06:23.786 "iscsi_enable_histogram", 00:06:23.786 "iscsi_set_options", 00:06:23.786 "iscsi_get_auth_groups", 00:06:23.786 "iscsi_auth_group_remove_secret", 00:06:23.786 "iscsi_auth_group_add_secret", 00:06:23.786 "iscsi_delete_auth_group", 00:06:23.786 "iscsi_create_auth_group", 00:06:23.786 "iscsi_set_discovery_auth", 00:06:23.786 "iscsi_get_options", 00:06:23.786 "iscsi_target_node_request_logout", 00:06:23.786 "iscsi_target_node_set_redirect", 00:06:23.786 "iscsi_target_node_set_auth", 00:06:23.786 "iscsi_target_node_add_lun", 00:06:23.786 "iscsi_get_stats", 00:06:23.786 "iscsi_get_connections", 00:06:23.786 "iscsi_portal_group_set_auth", 00:06:23.786 "iscsi_start_portal_group", 00:06:23.786 "iscsi_delete_portal_group", 00:06:23.786 "iscsi_create_portal_group", 00:06:23.786 "iscsi_get_portal_groups", 00:06:23.786 "iscsi_delete_target_node", 00:06:23.786 "iscsi_target_node_remove_pg_ig_maps", 00:06:23.786 "iscsi_target_node_add_pg_ig_maps", 00:06:23.786 "iscsi_create_target_node", 00:06:23.786 "iscsi_get_target_nodes", 00:06:23.786 "iscsi_delete_initiator_group", 00:06:23.786 "iscsi_initiator_group_remove_initiators", 00:06:23.786 "iscsi_initiator_group_add_initiators", 00:06:23.786 "iscsi_create_initiator_group", 00:06:23.786 "iscsi_get_initiator_groups", 00:06:23.786 "nvmf_set_crdt", 00:06:23.786 "nvmf_set_config", 00:06:23.786 "nvmf_set_max_subsystems", 00:06:23.786 "nvmf_stop_mdns_prr", 00:06:23.786 "nvmf_publish_mdns_prr", 00:06:23.786 "nvmf_subsystem_get_listeners", 00:06:23.786 "nvmf_subsystem_get_qpairs", 00:06:23.786 "nvmf_subsystem_get_controllers", 00:06:23.786 "nvmf_get_stats", 00:06:23.786 "nvmf_get_transports", 00:06:23.786 "nvmf_create_transport", 00:06:23.786 "nvmf_get_targets", 00:06:23.786 
"nvmf_delete_target", 00:06:23.786 "nvmf_create_target", 00:06:23.786 "nvmf_subsystem_allow_any_host", 00:06:23.786 "nvmf_subsystem_set_keys", 00:06:23.786 "nvmf_subsystem_remove_host", 00:06:23.786 "nvmf_subsystem_add_host", 00:06:23.786 "nvmf_ns_remove_host", 00:06:23.786 "nvmf_ns_add_host", 00:06:23.786 "nvmf_subsystem_remove_ns", 00:06:23.786 "nvmf_subsystem_set_ns_ana_group", 00:06:23.786 "nvmf_subsystem_add_ns", 00:06:23.786 "nvmf_subsystem_listener_set_ana_state", 00:06:23.786 "nvmf_discovery_get_referrals", 00:06:23.786 "nvmf_discovery_remove_referral", 00:06:23.786 "nvmf_discovery_add_referral", 00:06:23.786 "nvmf_subsystem_remove_listener", 00:06:23.786 "nvmf_subsystem_add_listener", 00:06:23.786 "nvmf_delete_subsystem", 00:06:23.786 "nvmf_create_subsystem", 00:06:23.786 "nvmf_get_subsystems", 00:06:23.786 "env_dpdk_get_mem_stats", 00:06:23.786 "nbd_get_disks", 00:06:23.786 "nbd_stop_disk", 00:06:23.786 "nbd_start_disk", 00:06:23.786 "ublk_recover_disk", 00:06:23.786 "ublk_get_disks", 00:06:23.786 "ublk_stop_disk", 00:06:23.786 "ublk_start_disk", 00:06:23.786 "ublk_destroy_target", 00:06:23.786 "ublk_create_target", 00:06:23.786 "virtio_blk_create_transport", 00:06:23.786 "virtio_blk_get_transports", 00:06:23.786 "vhost_controller_set_coalescing", 00:06:23.786 "vhost_get_controllers", 00:06:23.786 "vhost_delete_controller", 00:06:23.786 "vhost_create_blk_controller", 00:06:23.786 "vhost_scsi_controller_remove_target", 00:06:23.786 "vhost_scsi_controller_add_target", 00:06:23.786 "vhost_start_scsi_controller", 00:06:23.786 "vhost_create_scsi_controller", 00:06:23.786 "thread_set_cpumask", 00:06:23.786 "scheduler_set_options", 00:06:23.786 "framework_get_governor", 00:06:23.786 "framework_get_scheduler", 00:06:23.786 "framework_set_scheduler", 00:06:23.786 "framework_get_reactors", 00:06:23.786 "thread_get_io_channels", 00:06:23.786 "thread_get_pollers", 00:06:23.786 "thread_get_stats", 00:06:23.786 "framework_monitor_context_switch", 00:06:23.786 "spdk_kill_instance", 00:06:23.786 "log_enable_timestamps", 00:06:23.786 "log_get_flags", 00:06:23.786 "log_clear_flag", 00:06:23.786 "log_set_flag", 00:06:23.786 "log_get_level", 00:06:23.786 "log_set_level", 00:06:23.786 "log_get_print_level", 00:06:23.786 "log_set_print_level", 00:06:23.786 "framework_enable_cpumask_locks", 00:06:23.786 "framework_disable_cpumask_locks", 00:06:23.786 "framework_wait_init", 00:06:23.786 "framework_start_init", 00:06:23.786 "scsi_get_devices", 00:06:23.786 "bdev_get_histogram", 00:06:23.786 "bdev_enable_histogram", 00:06:23.786 "bdev_set_qos_limit", 00:06:23.786 "bdev_set_qd_sampling_period", 00:06:23.786 "bdev_get_bdevs", 00:06:23.786 "bdev_reset_iostat", 00:06:23.786 "bdev_get_iostat", 00:06:23.786 "bdev_examine", 00:06:23.786 "bdev_wait_for_examine", 00:06:23.786 "bdev_set_options", 00:06:23.786 "accel_get_stats", 00:06:23.786 "accel_set_options", 00:06:23.786 "accel_set_driver", 00:06:23.786 "accel_crypto_key_destroy", 00:06:23.786 "accel_crypto_keys_get", 00:06:23.786 "accel_crypto_key_create", 00:06:23.786 "accel_assign_opc", 00:06:23.786 "accel_get_module_info", 00:06:23.786 "accel_get_opc_assignments", 00:06:23.786 "vmd_rescan", 00:06:23.786 "vmd_remove_device", 00:06:23.786 "vmd_enable", 00:06:23.786 "sock_get_default_impl", 00:06:23.786 "sock_set_default_impl", 00:06:23.786 "sock_impl_set_options", 00:06:23.786 "sock_impl_get_options", 00:06:23.786 "iobuf_get_stats", 00:06:23.786 "iobuf_set_options", 00:06:23.786 "keyring_get_keys", 00:06:23.786 "framework_get_pci_devices", 00:06:23.786 
"framework_get_config", 00:06:23.786 "framework_get_subsystems", 00:06:23.786 "fsdev_set_opts", 00:06:23.786 "fsdev_get_opts", 00:06:23.786 "trace_get_info", 00:06:23.786 "trace_get_tpoint_group_mask", 00:06:23.786 "trace_disable_tpoint_group", 00:06:23.786 "trace_enable_tpoint_group", 00:06:23.786 "trace_clear_tpoint_mask", 00:06:23.786 "trace_set_tpoint_mask", 00:06:23.786 "notify_get_notifications", 00:06:23.786 "notify_get_types", 00:06:23.786 "spdk_get_version", 00:06:23.786 "rpc_get_methods" 00:06:23.786 ] 00:06:23.786 03:51:06 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:23.786 03:51:06 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:23.786 03:51:06 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:24.050 03:51:06 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:24.050 03:51:06 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58707 00:06:24.050 03:51:06 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58707 ']' 00:06:24.050 03:51:06 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58707 00:06:24.050 03:51:06 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:24.050 03:51:06 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.050 03:51:06 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58707 00:06:24.050 03:51:06 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.050 03:51:06 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.050 killing process with pid 58707 00:06:24.050 03:51:06 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58707' 00:06:24.050 03:51:06 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58707 00:06:24.050 03:51:06 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58707 00:06:26.588 00:06:26.589 real 0m4.234s 00:06:26.589 user 0m7.517s 00:06:26.589 sys 0m0.666s 00:06:26.589 03:51:09 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.589 ************************************ 00:06:26.589 END TEST spdkcli_tcp 00:06:26.589 03:51:09 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:26.589 ************************************ 00:06:26.589 03:51:09 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:26.589 03:51:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.589 03:51:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.589 03:51:09 -- common/autotest_common.sh@10 -- # set +x 00:06:26.589 ************************************ 00:06:26.589 START TEST dpdk_mem_utility 00:06:26.589 ************************************ 00:06:26.589 03:51:09 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:26.589 * Looking for test storage... 
00:06:26.589 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:26.589 03:51:09 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:26.589 03:51:09 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:06:26.589 03:51:09 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:26.589 03:51:09 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:26.589 03:51:09 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.589 03:51:09 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.589 03:51:09 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.589 03:51:09 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.589 03:51:09 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.589 03:51:09 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.589 03:51:09 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.589 03:51:09 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.589 03:51:09 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.589 03:51:09 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.589 03:51:09 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.589 03:51:09 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:26.589 03:51:09 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:26.589 03:51:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.589 03:51:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.589 03:51:09 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:26.589 03:51:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:26.589 03:51:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.589 03:51:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:26.589 03:51:09 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.589 03:51:09 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:26.589 03:51:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:26.589 03:51:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.589 03:51:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:26.589 03:51:09 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.589 03:51:09 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.589 03:51:09 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.589 03:51:09 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:26.589 03:51:09 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.589 03:51:09 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:26.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.589 --rc genhtml_branch_coverage=1 00:06:26.589 --rc genhtml_function_coverage=1 00:06:26.589 --rc genhtml_legend=1 00:06:26.589 --rc geninfo_all_blocks=1 00:06:26.589 --rc geninfo_unexecuted_blocks=1 00:06:26.589 00:06:26.589 ' 00:06:26.589 03:51:09 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:26.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.589 --rc 
genhtml_branch_coverage=1 00:06:26.589 --rc genhtml_function_coverage=1 00:06:26.589 --rc genhtml_legend=1 00:06:26.589 --rc geninfo_all_blocks=1 00:06:26.589 --rc geninfo_unexecuted_blocks=1 00:06:26.589 00:06:26.589 ' 00:06:26.589 03:51:09 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:26.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.589 --rc genhtml_branch_coverage=1 00:06:26.589 --rc genhtml_function_coverage=1 00:06:26.589 --rc genhtml_legend=1 00:06:26.589 --rc geninfo_all_blocks=1 00:06:26.589 --rc geninfo_unexecuted_blocks=1 00:06:26.589 00:06:26.589 ' 00:06:26.589 03:51:09 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:26.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.589 --rc genhtml_branch_coverage=1 00:06:26.589 --rc genhtml_function_coverage=1 00:06:26.589 --rc genhtml_legend=1 00:06:26.589 --rc geninfo_all_blocks=1 00:06:26.589 --rc geninfo_unexecuted_blocks=1 00:06:26.589 00:06:26.589 ' 00:06:26.589 03:51:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:26.589 03:51:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58830 00:06:26.589 03:51:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.589 03:51:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58830 00:06:26.589 03:51:09 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58830 ']' 00:06:26.589 03:51:09 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.589 03:51:09 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.589 03:51:09 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.589 03:51:09 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.589 03:51:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:26.849 [2024-12-07 03:51:09.411831] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
00:06:26.849 [2024-12-07 03:51:09.411971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58830 ] 00:06:27.108 [2024-12-07 03:51:09.595134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.108 [2024-12-07 03:51:09.711168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.047 03:51:10 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:28.047 03:51:10 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:28.047 03:51:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:28.047 03:51:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:28.047 03:51:10 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.047 03:51:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:28.047 { 00:06:28.047 "filename": "/tmp/spdk_mem_dump.txt" 00:06:28.047 } 00:06:28.047 03:51:10 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.047 03:51:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:28.047 DPDK memory size 824.000000 MiB in 1 heap(s) 00:06:28.047 1 heaps totaling size 824.000000 MiB 00:06:28.047 size: 824.000000 MiB heap id: 0 00:06:28.047 end heaps---------- 00:06:28.047 9 mempools totaling size 603.782043 MiB 00:06:28.047 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:28.047 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:28.047 size: 100.555481 MiB name: bdev_io_58830 00:06:28.047 size: 50.003479 MiB name: msgpool_58830 00:06:28.047 size: 36.509338 MiB name: fsdev_io_58830 00:06:28.047 size: 21.763794 MiB name: PDU_Pool 00:06:28.047 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:28.047 size: 4.133484 MiB name: evtpool_58830 00:06:28.047 size: 0.026123 MiB name: Session_Pool 00:06:28.047 end mempools------- 00:06:28.047 6 memzones totaling size 4.142822 MiB 00:06:28.047 size: 1.000366 MiB name: RG_ring_0_58830 00:06:28.047 size: 1.000366 MiB name: RG_ring_1_58830 00:06:28.047 size: 1.000366 MiB name: RG_ring_4_58830 00:06:28.047 size: 1.000366 MiB name: RG_ring_5_58830 00:06:28.047 size: 0.125366 MiB name: RG_ring_2_58830 00:06:28.047 size: 0.015991 MiB name: RG_ring_3_58830 00:06:28.047 end memzones------- 00:06:28.047 03:51:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:28.047 heap id: 0 total size: 824.000000 MiB number of busy elements: 322 number of free elements: 18 00:06:28.047 list of free elements. 
size: 16.779663 MiB 00:06:28.047 element at address: 0x200006400000 with size: 1.995972 MiB 00:06:28.047 element at address: 0x20000a600000 with size: 1.995972 MiB 00:06:28.047 element at address: 0x200003e00000 with size: 1.991028 MiB 00:06:28.047 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:28.047 element at address: 0x200019900040 with size: 0.999939 MiB 00:06:28.047 element at address: 0x200019a00000 with size: 0.999084 MiB 00:06:28.047 element at address: 0x200032600000 with size: 0.994324 MiB 00:06:28.047 element at address: 0x200000400000 with size: 0.992004 MiB 00:06:28.047 element at address: 0x200019200000 with size: 0.959656 MiB 00:06:28.047 element at address: 0x200019d00040 with size: 0.936401 MiB 00:06:28.047 element at address: 0x200000200000 with size: 0.716980 MiB 00:06:28.047 element at address: 0x20001b400000 with size: 0.561218 MiB 00:06:28.047 element at address: 0x200000c00000 with size: 0.489197 MiB 00:06:28.047 element at address: 0x200019600000 with size: 0.487976 MiB 00:06:28.047 element at address: 0x200019e00000 with size: 0.485413 MiB 00:06:28.047 element at address: 0x200012c00000 with size: 0.433228 MiB 00:06:28.047 element at address: 0x200028800000 with size: 0.390442 MiB 00:06:28.047 element at address: 0x200000800000 with size: 0.350891 MiB 00:06:28.047 list of standard malloc elements. size: 199.289429 MiB 00:06:28.047 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:06:28.047 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:06:28.047 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:28.047 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:28.047 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:06:28.047 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:28.047 element at address: 0x200019deff40 with size: 0.062683 MiB 00:06:28.047 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:28.047 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:06:28.047 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:06:28.047 element at address: 0x200012bff040 with size: 0.000305 MiB 00:06:28.047 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:06:28.047 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:06:28.047 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:06:28.047 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:06:28.047 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:06:28.048 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200000cff000 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200012bff180 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200012bff280 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200012bff380 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200012bff480 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200012bff580 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200012bff680 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200012bff780 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200012bff880 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200012bff980 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200012c6f780 
with size: 0.000244 MiB 00:06:28.048 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200019affc40 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4916c0 with size: 0.000244 MiB 
00:06:28.048 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:06:28.048 element at 
address: 0x20001b4948c0 with size: 0.000244 MiB 00:06:28.048 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:06:28.049 element at address: 0x200028863f40 with size: 0.000244 MiB 00:06:28.049 element at address: 0x200028864040 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886af80 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886b080 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886b180 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886b280 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886b380 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886b480 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886b580 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886b680 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886b780 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886b880 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886b980 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886be80 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886c080 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886c180 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886c280 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886c380 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886c480 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886c580 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886c680 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886c780 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886c880 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886c980 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886d080 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886d180 
with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886d280 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886d380 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886d480 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886d580 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886d680 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886d780 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886d880 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886d980 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886da80 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886db80 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886de80 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886df80 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886e080 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886e180 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886e280 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886e380 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886e480 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886e580 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886e680 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886e780 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886e880 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886e980 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886f080 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886f180 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886f280 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886f380 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886f480 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886f580 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886f680 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886f780 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886f880 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886f980 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:06:28.049 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:06:28.049 list of memzone associated elements. 
size: 607.930908 MiB 00:06:28.049 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:06:28.049 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:28.049 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:06:28.049 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:28.049 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:06:28.049 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58830_0 00:06:28.049 element at address: 0x200000dff340 with size: 48.003113 MiB 00:06:28.049 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58830_0 00:06:28.049 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:06:28.049 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58830_0 00:06:28.049 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:06:28.049 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:28.049 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:06:28.049 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:28.049 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:06:28.049 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58830_0 00:06:28.049 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:06:28.049 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58830 00:06:28.049 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:28.049 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58830 00:06:28.049 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:06:28.049 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:28.049 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:06:28.049 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:28.049 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:28.049 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:28.049 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:06:28.049 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:28.049 element at address: 0x200000cff100 with size: 1.000549 MiB 00:06:28.049 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58830 00:06:28.049 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:06:28.049 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58830 00:06:28.049 element at address: 0x200019affd40 with size: 1.000549 MiB 00:06:28.049 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58830 00:06:28.049 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:06:28.049 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58830 00:06:28.049 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:06:28.049 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58830 00:06:28.049 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:06:28.049 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58830 00:06:28.049 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:06:28.049 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:28.049 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:06:28.049 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:28.049 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:06:28.049 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:06:28.049 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:06:28.049 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58830 00:06:28.049 element at address: 0x20000085df80 with size: 0.125549 MiB 00:06:28.049 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58830 00:06:28.049 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:06:28.049 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:28.049 element at address: 0x200028864140 with size: 0.023804 MiB 00:06:28.049 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:28.049 element at address: 0x200000859d40 with size: 0.016174 MiB 00:06:28.049 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58830 00:06:28.050 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:06:28.050 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:28.050 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:06:28.050 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58830 00:06:28.050 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:06:28.050 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58830 00:06:28.050 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:06:28.050 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58830 00:06:28.050 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:06:28.050 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:28.050 03:51:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:28.050 03:51:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58830 00:06:28.050 03:51:10 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58830 ']' 00:06:28.050 03:51:10 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58830 00:06:28.050 03:51:10 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:28.050 03:51:10 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.050 03:51:10 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58830 00:06:28.309 03:51:10 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:28.309 killing process with pid 58830 00:06:28.309 03:51:10 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:28.309 03:51:10 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58830' 00:06:28.309 03:51:10 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58830 00:06:28.309 03:51:10 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58830 00:06:30.843 00:06:30.843 real 0m4.118s 00:06:30.843 user 0m4.039s 00:06:30.843 sys 0m0.593s 00:06:30.843 03:51:13 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.843 03:51:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:30.843 ************************************ 00:06:30.843 END TEST dpdk_mem_utility 00:06:30.843 ************************************ 00:06:30.843 03:51:13 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:30.843 03:51:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.843 03:51:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.843 03:51:13 -- common/autotest_common.sh@10 -- # set +x 
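Note: for reference, the memory-statistics flow exercised by the dpdk_mem_utility test above can be repeated by hand against a running spdk_tgt. A minimal sketch using only the commands visible in the trace; the default RPC socket is assumed:

  # Ask the target to dump DPDK memory stats; per the RPC reply above,
  # the dump is written to /tmp/spdk_mem_dump.txt.
  ./scripts/rpc.py env_dpdk_get_mem_stats
  # Summarize heaps, mempools, and memzones from that dump (test step @21 above).
  ./scripts/dpdk_mem_info.py
  # Per-element detail for memory ID 0 (test step @23) - the long listing above.
  ./scripts/dpdk_mem_info.py -m 0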
00:06:30.843 ************************************ 00:06:30.843 START TEST event 00:06:30.843 ************************************ 00:06:30.843 03:51:13 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:30.843 * Looking for test storage... 00:06:30.843 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:30.843 03:51:13 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:30.843 03:51:13 event -- common/autotest_common.sh@1711 -- # lcov --version 00:06:30.843 03:51:13 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:30.843 03:51:13 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:30.843 03:51:13 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.843 03:51:13 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.843 03:51:13 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.843 03:51:13 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.843 03:51:13 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.843 03:51:13 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.843 03:51:13 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.843 03:51:13 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.843 03:51:13 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.843 03:51:13 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.843 03:51:13 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.843 03:51:13 event -- scripts/common.sh@344 -- # case "$op" in 00:06:30.843 03:51:13 event -- scripts/common.sh@345 -- # : 1 00:06:30.843 03:51:13 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.843 03:51:13 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:30.843 03:51:13 event -- scripts/common.sh@365 -- # decimal 1 00:06:30.843 03:51:13 event -- scripts/common.sh@353 -- # local d=1 00:06:30.843 03:51:13 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.843 03:51:13 event -- scripts/common.sh@355 -- # echo 1 00:06:30.843 03:51:13 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.843 03:51:13 event -- scripts/common.sh@366 -- # decimal 2 00:06:30.843 03:51:13 event -- scripts/common.sh@353 -- # local d=2 00:06:30.843 03:51:13 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.843 03:51:13 event -- scripts/common.sh@355 -- # echo 2 00:06:30.843 03:51:13 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.843 03:51:13 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.843 03:51:13 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.843 03:51:13 event -- scripts/common.sh@368 -- # return 0 00:06:30.843 03:51:13 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.843 03:51:13 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:30.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.843 --rc genhtml_branch_coverage=1 00:06:30.843 --rc genhtml_function_coverage=1 00:06:30.843 --rc genhtml_legend=1 00:06:30.843 --rc geninfo_all_blocks=1 00:06:30.843 --rc geninfo_unexecuted_blocks=1 00:06:30.843 00:06:30.843 ' 00:06:30.843 03:51:13 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:30.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.843 --rc genhtml_branch_coverage=1 00:06:30.843 --rc genhtml_function_coverage=1 00:06:30.843 --rc genhtml_legend=1 00:06:30.843 --rc 
geninfo_all_blocks=1 00:06:30.843 --rc geninfo_unexecuted_blocks=1 00:06:30.843 00:06:30.843 ' 00:06:30.843 03:51:13 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:30.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.843 --rc genhtml_branch_coverage=1 00:06:30.843 --rc genhtml_function_coverage=1 00:06:30.843 --rc genhtml_legend=1 00:06:30.843 --rc geninfo_all_blocks=1 00:06:30.843 --rc geninfo_unexecuted_blocks=1 00:06:30.843 00:06:30.843 ' 00:06:30.843 03:51:13 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:30.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.843 --rc genhtml_branch_coverage=1 00:06:30.843 --rc genhtml_function_coverage=1 00:06:30.843 --rc genhtml_legend=1 00:06:30.843 --rc geninfo_all_blocks=1 00:06:30.843 --rc geninfo_unexecuted_blocks=1 00:06:30.843 00:06:30.843 ' 00:06:30.843 03:51:13 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:30.843 03:51:13 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:30.843 03:51:13 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:30.843 03:51:13 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:30.843 03:51:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.843 03:51:13 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.843 ************************************ 00:06:30.843 START TEST event_perf 00:06:30.843 ************************************ 00:06:30.843 03:51:13 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:30.843 Running I/O for 1 seconds...[2024-12-07 03:51:13.561234] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:06:30.843 [2024-12-07 03:51:13.561343] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58938 ] 00:06:31.102 [2024-12-07 03:51:13.744798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:31.361 [2024-12-07 03:51:13.868849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.361 [2024-12-07 03:51:13.869035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.361 [2024-12-07 03:51:13.869171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.361 [2024-12-07 03:51:13.869180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.739 Running I/O for 1 seconds... 00:06:32.739 lcore 0: 196642 00:06:32.739 lcore 1: 196643 00:06:32.739 lcore 2: 196643 00:06:32.739 lcore 3: 196645 00:06:32.739 done. 
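Note: the per-lcore counts above are printed by the event_perf harness launched earlier in this block; each reactor reports how many events it processed during the run. A standalone sketch of the same invocation, with paths as they appear in this repo:

  # Event-framework perf test: 4 cores (mask 0xF), run for 1 second.
  ./test/event/event_perf/event_perf -m 0xF -t 1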
00:06:32.739 00:06:32.739 real 0m1.608s 00:06:32.739 user 0m4.367s 00:06:32.739 sys 0m0.120s 00:06:32.739 03:51:15 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:32.739 ************************************ 00:06:32.739 END TEST event_perf 00:06:32.739 ************************************ 00:06:32.739 03:51:15 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:32.739 03:51:15 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:32.739 03:51:15 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:32.739 03:51:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:32.739 03:51:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.739 ************************************ 00:06:32.739 START TEST event_reactor 00:06:32.739 ************************************ 00:06:32.739 03:51:15 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:32.739 [2024-12-07 03:51:15.244455] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:06:32.739 [2024-12-07 03:51:15.244586] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58977 ] 00:06:32.739 [2024-12-07 03:51:15.422791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.996 [2024-12-07 03:51:15.530101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.370 test_start 00:06:34.370 oneshot 00:06:34.370 tick 100 00:06:34.370 tick 100 00:06:34.370 tick 250 00:06:34.370 tick 100 00:06:34.370 tick 100 00:06:34.370 tick 100 00:06:34.370 tick 250 00:06:34.370 tick 500 00:06:34.370 tick 100 00:06:34.370 tick 100 00:06:34.370 tick 250 00:06:34.370 tick 100 00:06:34.370 tick 100 00:06:34.370 test_end 00:06:34.370 00:06:34.370 real 0m1.564s 00:06:34.370 user 0m1.347s 00:06:34.371 sys 0m0.108s 00:06:34.371 03:51:16 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.371 ************************************ 00:06:34.371 END TEST event_reactor 00:06:34.371 ************************************ 00:06:34.371 03:51:16 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:34.371 03:51:16 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:34.371 03:51:16 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:34.371 03:51:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.371 03:51:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:34.371 ************************************ 00:06:34.371 START TEST event_reactor_perf 00:06:34.371 ************************************ 00:06:34.371 03:51:16 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:34.371 [2024-12-07 03:51:16.880360] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
00:06:34.371 [2024-12-07 03:51:16.880468] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59014 ] 00:06:34.371 [2024-12-07 03:51:17.062241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.629 [2024-12-07 03:51:17.173763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.004 test_start 00:06:36.004 test_end 00:06:36.004 Performance: 379570 events per second 00:06:36.004 00:06:36.004 real 0m1.568s 00:06:36.004 user 0m1.352s 00:06:36.004 sys 0m0.107s 00:06:36.004 ************************************ 00:06:36.004 03:51:18 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.004 03:51:18 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:36.004 END TEST event_reactor_perf 00:06:36.004 ************************************ 00:06:36.004 03:51:18 event -- event/event.sh@49 -- # uname -s 00:06:36.004 03:51:18 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:36.004 03:51:18 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:36.004 03:51:18 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:36.004 03:51:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.004 03:51:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:36.004 ************************************ 00:06:36.004 START TEST event_scheduler 00:06:36.004 ************************************ 00:06:36.004 03:51:18 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:36.004 * Looking for test storage... 
00:06:36.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:36.004 03:51:18 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:36.004 03:51:18 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:06:36.004 03:51:18 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:36.004 03:51:18 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:36.004 03:51:18 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:36.004 03:51:18 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:36.004 03:51:18 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:36.004 03:51:18 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.004 03:51:18 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:36.004 03:51:18 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:36.004 03:51:18 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:36.004 03:51:18 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:36.004 03:51:18 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:36.004 03:51:18 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:36.004 03:51:18 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:36.004 03:51:18 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:36.004 03:51:18 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:36.004 03:51:18 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:36.004 03:51:18 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:36.004 03:51:18 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:36.005 03:51:18 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:36.005 03:51:18 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.005 03:51:18 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:36.005 03:51:18 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:36.005 03:51:18 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:36.005 03:51:18 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:36.005 03:51:18 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.005 03:51:18 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:36.005 03:51:18 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:36.005 03:51:18 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:36.005 03:51:18 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:36.005 03:51:18 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:36.005 03:51:18 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.005 03:51:18 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:36.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.005 --rc genhtml_branch_coverage=1 00:06:36.005 --rc genhtml_function_coverage=1 00:06:36.005 --rc genhtml_legend=1 00:06:36.005 --rc geninfo_all_blocks=1 00:06:36.005 --rc geninfo_unexecuted_blocks=1 00:06:36.005 00:06:36.005 ' 00:06:36.005 03:51:18 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:36.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.005 --rc genhtml_branch_coverage=1 00:06:36.005 --rc genhtml_function_coverage=1 00:06:36.005 --rc genhtml_legend=1 00:06:36.005 --rc geninfo_all_blocks=1 00:06:36.005 --rc geninfo_unexecuted_blocks=1 00:06:36.005 00:06:36.005 ' 00:06:36.005 03:51:18 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:36.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.005 --rc genhtml_branch_coverage=1 00:06:36.005 --rc genhtml_function_coverage=1 00:06:36.005 --rc genhtml_legend=1 00:06:36.005 --rc geninfo_all_blocks=1 00:06:36.005 --rc geninfo_unexecuted_blocks=1 00:06:36.005 00:06:36.005 ' 00:06:36.005 03:51:18 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:36.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.005 --rc genhtml_branch_coverage=1 00:06:36.005 --rc genhtml_function_coverage=1 00:06:36.005 --rc genhtml_legend=1 00:06:36.005 --rc geninfo_all_blocks=1 00:06:36.005 --rc geninfo_unexecuted_blocks=1 00:06:36.005 00:06:36.005 ' 00:06:36.005 03:51:18 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:36.005 03:51:18 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59090 00:06:36.005 03:51:18 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:36.005 03:51:18 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:36.005 03:51:18 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59090 00:06:36.005 03:51:18 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59090 ']' 00:06:36.005 03:51:18 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.005 03:51:18 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:36.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.005 03:51:18 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.005 03:51:18 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:36.005 03:51:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:36.263 [2024-12-07 03:51:18.802712] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:06:36.264 [2024-12-07 03:51:18.802844] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59090 ] 00:06:36.264 [2024-12-07 03:51:18.984464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:36.521 [2024-12-07 03:51:19.102794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.521 [2024-12-07 03:51:19.103043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.521 [2024-12-07 03:51:19.103094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.521 [2024-12-07 03:51:19.103140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:37.089 03:51:19 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:37.089 03:51:19 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:37.089 03:51:19 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:37.089 03:51:19 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.089 03:51:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:37.089 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:37.089 POWER: Cannot set governor of lcore 0 to userspace 00:06:37.089 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:37.089 POWER: Cannot set governor of lcore 0 to performance 00:06:37.089 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:37.089 POWER: Cannot set governor of lcore 0 to userspace 00:06:37.089 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:37.089 POWER: Cannot set governor of lcore 0 to userspace 00:06:37.089 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:37.089 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:37.089 POWER: Unable to set Power Management Environment for lcore 0 00:06:37.089 [2024-12-07 03:51:19.636204] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:37.089 [2024-12-07 03:51:19.636231] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:37.089 [2024-12-07 03:51:19.636244] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:37.089 [2024-12-07 03:51:19.636263] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:37.089 [2024-12-07 03:51:19.636274] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:37.089 [2024-12-07 03:51:19.636286] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:37.089 03:51:19 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.089 03:51:19 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:37.089 03:51:19 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.089 03:51:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:37.348 [2024-12-07 03:51:19.952171] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:37.348 03:51:19 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.348 03:51:19 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:37.348 03:51:19 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.348 03:51:19 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.348 03:51:19 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:37.348 ************************************ 00:06:37.348 START TEST scheduler_create_thread 00:06:37.348 ************************************ 00:06:37.348 03:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:37.348 03:51:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:37.348 03:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.348 03:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.348 2 00:06:37.348 03:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.348 03:51:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:37.348 03:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.348 03:51:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.348 3 00:06:37.348 03:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.348 03:51:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:37.348 03:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.348 03:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.348 4 00:06:37.348 03:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.348 03:51:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:37.348 03:51:20 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.348 03:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.348 5 00:06:37.348 03:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.348 03:51:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:37.348 03:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.348 03:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.348 6 00:06:37.348 03:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.348 03:51:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:37.348 03:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.348 03:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.348 7 00:06:37.348 03:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.348 03:51:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:37.348 03:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.348 03:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.348 8 00:06:37.348 03:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.348 03:51:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:37.348 03:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.348 03:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.348 9 00:06:37.348 03:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.348 03:51:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:37.348 03:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.348 03:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:37.607 10 00:06:37.607 03:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.607 03:51:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:37.607 03:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.607 03:51:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.999 03:51:21 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.999 03:51:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:38.999 03:51:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:38.999 03:51:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.999 03:51:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.566 03:51:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.566 03:51:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:39.566 03:51:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.566 03:51:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.502 03:51:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.502 03:51:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:40.502 03:51:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:40.502 03:51:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.502 03:51:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.436 03:51:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.436 00:06:41.436 real 0m3.885s 00:06:41.436 user 0m0.020s 00:06:41.436 sys 0m0.014s 00:06:41.436 03:51:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:41.436 03:51:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.436 ************************************ 00:06:41.436 END TEST scheduler_create_thread 00:06:41.436 ************************************ 00:06:41.436 03:51:23 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:41.436 03:51:23 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59090 00:06:41.436 03:51:23 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59090 ']' 00:06:41.436 03:51:23 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59090 00:06:41.436 03:51:23 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:41.436 03:51:23 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.436 03:51:23 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59090 00:06:41.436 03:51:23 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:41.436 03:51:23 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:41.436 03:51:23 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59090' 00:06:41.436 killing process with pid 59090 00:06:41.436 03:51:23 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59090 00:06:41.436 03:51:23 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 59090 00:06:41.695 [2024-12-07 03:51:24.233992] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:43.096 00:06:43.096 real 0m6.922s 00:06:43.096 user 0m14.271s 00:06:43.096 sys 0m0.531s 00:06:43.096 03:51:25 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.096 03:51:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:43.096 ************************************ 00:06:43.096 END TEST event_scheduler 00:06:43.096 ************************************ 00:06:43.096 03:51:25 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:43.096 03:51:25 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:43.096 03:51:25 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.096 03:51:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.096 03:51:25 event -- common/autotest_common.sh@10 -- # set +x 00:06:43.096 ************************************ 00:06:43.096 START TEST app_repeat 00:06:43.096 ************************************ 00:06:43.096 03:51:25 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:43.096 03:51:25 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.096 03:51:25 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:43.096 03:51:25 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:43.096 03:51:25 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:43.096 03:51:25 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:43.096 03:51:25 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:43.096 03:51:25 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:43.096 03:51:25 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59212 00:06:43.096 03:51:25 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:43.096 03:51:25 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:43.096 Process app_repeat pid: 59212 00:06:43.096 03:51:25 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59212' 00:06:43.096 03:51:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:43.096 spdk_app_start Round 0 00:06:43.096 03:51:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:43.096 03:51:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59212 /var/tmp/spdk-nbd.sock 00:06:43.096 03:51:25 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59212 ']' 00:06:43.096 03:51:25 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:43.096 03:51:25 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:43.096 03:51:25 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:43.096 03:51:25 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.096 03:51:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:43.096 [2024-12-07 03:51:25.551053] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
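The scheduler_create_thread test that ends above exercises SPDK's test-only scheduler RPC plugin: it creates idle threads pinned to individual cores, one thread at 30% activity, raises another thread's active load to 50%, and deletes a thread again. A minimal sketch of the same flow, assuming a running SPDK app on the default RPC socket and that scheduler_plugin is importable by rpc.py (both assumptions here, not shown in this log):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # idle threads pinned to cores 0-3 (cpumasks 0x1..0x8, 0% active)
    for mask in 0x1 0x2 0x4 0x8; do
        $rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m $mask -a 0
    done
    # scheduler_thread_create prints the new thread id, which later calls reuse
    tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    $rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50
    tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    $rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"

The trace above captures exactly this sequence: thread_id=11 is the half_active thread whose load is set to 50, and thread_id=12 is the "deleted" thread removed immediately after creation.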
00:06:43.096 [2024-12-07 03:51:25.551182] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59212 ] 00:06:43.096 [2024-12-07 03:51:25.734129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.355 [2024-12-07 03:51:25.848023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.355 [2024-12-07 03:51:25.848058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.922 03:51:26 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.922 03:51:26 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:43.922 03:51:26 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.181 Malloc0 00:06:44.181 03:51:26 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.441 Malloc1 00:06:44.441 03:51:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.441 03:51:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.441 03:51:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.441 03:51:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:44.441 03:51:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.441 03:51:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:44.441 03:51:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.441 03:51:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.441 03:51:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.441 03:51:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:44.441 03:51:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.441 03:51:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:44.441 03:51:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:44.441 03:51:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:44.441 03:51:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.441 03:51:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:44.701 /dev/nbd0 00:06:44.701 03:51:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:44.701 03:51:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:44.701 03:51:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:44.701 03:51:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:44.701 03:51:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:44.701 03:51:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:44.701 03:51:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:44.701 03:51:27 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:44.701 03:51:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:44.701 03:51:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:44.701 03:51:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:44.701 1+0 records in 00:06:44.701 1+0 records out 00:06:44.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374701 s, 10.9 MB/s 00:06:44.701 03:51:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:44.701 03:51:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:44.701 03:51:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:44.701 03:51:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:44.701 03:51:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:44.701 03:51:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.701 03:51:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.701 03:51:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:44.701 /dev/nbd1 00:06:44.961 03:51:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:44.961 03:51:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:44.961 03:51:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:44.961 03:51:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:44.961 03:51:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:44.961 03:51:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:44.961 03:51:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:44.961 03:51:27 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:44.961 03:51:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:44.961 03:51:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:44.961 03:51:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:44.961 1+0 records in 00:06:44.961 1+0 records out 00:06:44.961 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405543 s, 10.1 MB/s 00:06:44.961 03:51:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:44.961 03:51:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:44.961 03:51:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:44.961 03:51:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:44.961 03:51:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:44.961 03:51:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:44.961 03:51:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.961 03:51:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.961 03:51:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
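Before any data moves, app_repeat creates two 64 MiB malloc bdevs with 4096-byte blocks and exports them as kernel nbd devices over the dedicated /var/tmp/spdk-nbd.sock socket; waitfornbd then polls /proc/partitions and issues a single O_DIRECT read to confirm each device actually serves I/O. A condensed sketch of that setup and probe, matching the trace above (the retry budget of 20 is from the trace; the sleep interval is an assumption, and the real helper in autotest_common.sh uses two separate retry loops):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096        # prints Malloc0
    $rpc bdev_malloc_create 64 4096        # prints Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1
    for nbd in nbd0 nbd1; do
        for ((i = 1; i <= 20; i++)); do    # wait until the kernel publishes it
            grep -q -w "$nbd" /proc/partitions && break
            sleep 0.1                      # assumed delay
        done
        # one direct-I/O block proves the device answers reads
        dd if=/dev/$nbd of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        [ "$(stat -c %s /tmp/nbdtest)" -ne 0 ]   # fail if nothing was read
        rm -f /tmp/nbdtest
    done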
00:06:44.961 03:51:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:44.961 03:51:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:44.961 { 00:06:44.961 "nbd_device": "/dev/nbd0", 00:06:44.961 "bdev_name": "Malloc0" 00:06:44.961 }, 00:06:44.961 { 00:06:44.961 "nbd_device": "/dev/nbd1", 00:06:44.961 "bdev_name": "Malloc1" 00:06:44.961 } 00:06:44.961 ]' 00:06:44.961 03:51:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:44.961 { 00:06:44.961 "nbd_device": "/dev/nbd0", 00:06:44.961 "bdev_name": "Malloc0" 00:06:44.961 }, 00:06:44.961 { 00:06:44.961 "nbd_device": "/dev/nbd1", 00:06:44.961 "bdev_name": "Malloc1" 00:06:44.961 } 00:06:44.961 ]' 00:06:44.961 03:51:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.220 03:51:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:45.220 /dev/nbd1' 00:06:45.220 03:51:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:45.220 /dev/nbd1' 00:06:45.220 03:51:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.220 03:51:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:45.220 03:51:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:45.220 03:51:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:45.220 03:51:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:45.220 03:51:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:45.220 03:51:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.220 03:51:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.220 03:51:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:45.220 03:51:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:45.220 03:51:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:45.221 03:51:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:45.221 256+0 records in 00:06:45.221 256+0 records out 00:06:45.221 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114598 s, 91.5 MB/s 00:06:45.221 03:51:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.221 03:51:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:45.221 256+0 records in 00:06:45.221 256+0 records out 00:06:45.221 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255362 s, 41.1 MB/s 00:06:45.221 03:51:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.221 03:51:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:45.221 256+0 records in 00:06:45.221 256+0 records out 00:06:45.221 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0336708 s, 31.1 MB/s 00:06:45.221 03:51:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:45.221 03:51:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.221 03:51:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.221 03:51:27 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:45.221 03:51:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:45.221 03:51:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:45.221 03:51:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:45.221 03:51:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.221 03:51:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:45.221 03:51:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.221 03:51:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:45.221 03:51:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:45.221 03:51:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:45.221 03:51:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.221 03:51:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.221 03:51:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:45.221 03:51:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:45.221 03:51:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.221 03:51:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:45.481 03:51:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:45.481 03:51:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:45.481 03:51:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:45.481 03:51:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.481 03:51:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.481 03:51:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:45.481 03:51:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:45.481 03:51:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.481 03:51:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.481 03:51:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:45.740 03:51:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:45.740 03:51:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:45.740 03:51:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:45.740 03:51:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.740 03:51:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.740 03:51:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:45.740 03:51:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:45.740 03:51:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.740 03:51:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.740 03:51:28 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.740 03:51:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.999 03:51:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:45.999 03:51:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:45.999 03:51:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.999 03:51:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:45.999 03:51:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:45.999 03:51:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.999 03:51:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:45.999 03:51:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:45.999 03:51:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:45.999 03:51:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:45.999 03:51:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:45.999 03:51:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:45.999 03:51:28 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:46.259 03:51:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:47.637 [2024-12-07 03:51:30.117946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.637 [2024-12-07 03:51:30.220894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.637 [2024-12-07 03:51:30.220894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.896 [2024-12-07 03:51:30.410609] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:47.896 [2024-12-07 03:51:30.410697] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:49.273 03:51:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:49.273 spdk_app_start Round 1 00:06:49.273 03:51:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:49.273 03:51:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59212 /var/tmp/spdk-nbd.sock 00:06:49.273 03:51:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59212 ']' 00:06:49.273 03:51:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:49.273 03:51:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:49.273 03:51:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:49.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
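Each round then performs the same read-back check: 1 MiB of random data is written through both nbd devices with direct I/O and compared byte-for-byte against the source file, after which the exports are torn down and the app is asked to restart via SIGTERM. The dd and cmp invocations below are exactly the ones traced above; only the temp-file path is shortened:

    tmp=/tmp/nbdrandtest
    dd if=/dev/urandom of=$tmp bs=4096 count=256           # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct  # write through the bdev
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $tmp $nbd                             # byte-wise read-back compare
    done
    rm $tmp
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1

Any mismatch makes cmp exit nonzero and fails the round, which is why the per-device throughput figures in the dd output are the only visible artifact of a passing check.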
00:06:49.273 03:51:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:49.273 03:51:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:49.533 03:51:32 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.533 03:51:32 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:49.533 03:51:32 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:49.792 Malloc0 00:06:49.792 03:51:32 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:50.051 Malloc1 00:06:50.051 03:51:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:50.051 03:51:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.051 03:51:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.051 03:51:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:50.051 03:51:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.051 03:51:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:50.051 03:51:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:50.051 03:51:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.051 03:51:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.051 03:51:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:50.051 03:51:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.051 03:51:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:50.051 03:51:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:50.051 03:51:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:50.051 03:51:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.051 03:51:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:50.310 /dev/nbd0 00:06:50.310 03:51:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:50.310 03:51:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:50.310 03:51:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:50.310 03:51:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:50.310 03:51:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:50.310 03:51:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:50.310 03:51:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:50.310 03:51:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:50.310 03:51:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:50.310 03:51:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:50.310 03:51:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:50.310 1+0 records in 00:06:50.310 1+0 records out 
00:06:50.310 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466617 s, 8.8 MB/s 00:06:50.310 03:51:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:50.310 03:51:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:50.310 03:51:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:50.310 03:51:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:50.310 03:51:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:50.310 03:51:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.310 03:51:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.311 03:51:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:50.570 /dev/nbd1 00:06:50.570 03:51:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:50.570 03:51:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:50.570 03:51:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:50.570 03:51:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:50.570 03:51:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:50.570 03:51:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:50.570 03:51:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:50.570 03:51:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:50.570 03:51:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:50.570 03:51:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:50.570 03:51:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:50.570 1+0 records in 00:06:50.570 1+0 records out 00:06:50.570 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000474263 s, 8.6 MB/s 00:06:50.570 03:51:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:50.570 03:51:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:50.570 03:51:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:50.570 03:51:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:50.570 03:51:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:50.570 03:51:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.570 03:51:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.570 03:51:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.570 03:51:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.570 03:51:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.829 03:51:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:50.829 { 00:06:50.829 "nbd_device": "/dev/nbd0", 00:06:50.829 "bdev_name": "Malloc0" 00:06:50.829 }, 00:06:50.829 { 00:06:50.829 "nbd_device": "/dev/nbd1", 00:06:50.829 "bdev_name": "Malloc1" 00:06:50.829 } 
00:06:50.829 ]' 00:06:50.829 03:51:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:50.829 { 00:06:50.829 "nbd_device": "/dev/nbd0", 00:06:50.829 "bdev_name": "Malloc0" 00:06:50.829 }, 00:06:50.829 { 00:06:50.829 "nbd_device": "/dev/nbd1", 00:06:50.829 "bdev_name": "Malloc1" 00:06:50.829 } 00:06:50.829 ]' 00:06:50.829 03:51:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.829 03:51:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:50.829 /dev/nbd1' 00:06:50.829 03:51:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.829 03:51:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:50.829 /dev/nbd1' 00:06:50.829 03:51:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:50.829 03:51:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:50.830 03:51:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:50.830 03:51:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:50.830 03:51:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:50.830 03:51:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.830 03:51:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.830 03:51:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:50.830 03:51:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:50.830 03:51:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:50.830 03:51:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:50.830 256+0 records in 00:06:50.830 256+0 records out 00:06:50.830 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0111859 s, 93.7 MB/s 00:06:50.830 03:51:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.830 03:51:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:51.089 256+0 records in 00:06:51.089 256+0 records out 00:06:51.089 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0296214 s, 35.4 MB/s 00:06:51.089 03:51:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.089 03:51:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:51.089 256+0 records in 00:06:51.090 256+0 records out 00:06:51.090 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0366372 s, 28.6 MB/s 00:06:51.090 03:51:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:51.090 03:51:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.090 03:51:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.090 03:51:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:51.090 03:51:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:51.090 03:51:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:51.090 03:51:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:51.090 03:51:33 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.090 03:51:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:51.090 03:51:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.090 03:51:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:51.090 03:51:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:51.090 03:51:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:51.090 03:51:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.090 03:51:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.090 03:51:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:51.090 03:51:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:51.090 03:51:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.090 03:51:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:51.349 03:51:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:51.349 03:51:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:51.349 03:51:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:51.349 03:51:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.349 03:51:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.349 03:51:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:51.349 03:51:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:51.349 03:51:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.349 03:51:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.349 03:51:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:51.608 03:51:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:51.608 03:51:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:51.608 03:51:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:51.608 03:51:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.608 03:51:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.608 03:51:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:51.608 03:51:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:51.608 03:51:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.608 03:51:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:51.608 03:51:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.608 03:51:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.608 03:51:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:51.608 03:51:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:51.608 03:51:34 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:51.868 03:51:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:51.868 03:51:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:51.868 03:51:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.868 03:51:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:51.868 03:51:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:51.868 03:51:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:51.868 03:51:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:51.868 03:51:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:51.868 03:51:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:51.868 03:51:34 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:52.127 03:51:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:53.506 [2024-12-07 03:51:35.921122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:53.506 [2024-12-07 03:51:36.030581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.506 [2024-12-07 03:51:36.030599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.506 [2024-12-07 03:51:36.228745] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:53.506 [2024-12-07 03:51:36.228841] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:55.408 03:51:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:55.408 spdk_app_start Round 2 00:06:55.408 03:51:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:55.408 03:51:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59212 /var/tmp/spdk-nbd.sock 00:06:55.408 03:51:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59212 ']' 00:06:55.408 03:51:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:55.408 03:51:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:55.408 03:51:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
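The count checks bracketing each verify pass come from nbd_get_count: it asks the RPC server for the JSON disk list, extracts the device paths with jq, and counts them with grep -c. Because grep -c exits nonzero when there are no matches, the helper falls back to true for the empty post-teardown list, which is the bare '# true' visible in the trace. A sketch reconstructed from those observations (the function body is an approximation, not the verbatim helper):

    nbd_get_count() {
        local json names
        json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
        names=$(echo "$json" | jq -r '.[] | .nbd_device')
        # grep -c still prints 0 on no match; '|| true' only masks its exit code
        echo "$names" | grep -c /dev/nbd || true
    }

While the disks are exported this yields 2 and after nbd_stop_disk it yields 0; the traced script errors out whenever the count differs from the expected value ('[' 0 -ne 0 ']' and the like above).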
00:06:55.408 03:51:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.408 03:51:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:55.408 03:51:38 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.408 03:51:38 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:55.408 03:51:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:55.666 Malloc0 00:06:55.666 03:51:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:55.924 Malloc1 00:06:55.924 03:51:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:55.924 03:51:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.924 03:51:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:55.924 03:51:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:55.924 03:51:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.924 03:51:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:55.924 03:51:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:55.924 03:51:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.924 03:51:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:55.924 03:51:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:55.924 03:51:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:55.924 03:51:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:55.925 03:51:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:55.925 03:51:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:55.925 03:51:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:55.925 03:51:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:56.182 /dev/nbd0 00:06:56.182 03:51:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:56.182 03:51:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:56.182 03:51:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:56.182 03:51:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:56.182 03:51:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:56.182 03:51:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:56.182 03:51:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:56.182 03:51:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:56.182 03:51:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:56.182 03:51:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:56.182 03:51:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:56.182 1+0 records in 00:06:56.182 1+0 records out 
00:06:56.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000271192 s, 15.1 MB/s 00:06:56.182 03:51:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:56.182 03:51:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:56.182 03:51:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:56.182 03:51:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:56.182 03:51:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:56.182 03:51:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.182 03:51:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.182 03:51:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:56.440 /dev/nbd1 00:06:56.440 03:51:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:56.440 03:51:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:56.440 03:51:39 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:56.440 03:51:39 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:56.440 03:51:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:56.440 03:51:39 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:56.440 03:51:39 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:56.440 03:51:39 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:56.440 03:51:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:56.440 03:51:39 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:56.440 03:51:39 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:56.440 1+0 records in 00:06:56.440 1+0 records out 00:06:56.440 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00047308 s, 8.7 MB/s 00:06:56.440 03:51:39 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:56.440 03:51:39 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:56.440 03:51:39 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:56.440 03:51:39 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:56.440 03:51:39 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:56.440 03:51:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.440 03:51:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.440 03:51:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:56.440 03:51:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.440 03:51:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:56.699 03:51:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:56.699 { 00:06:56.699 "nbd_device": "/dev/nbd0", 00:06:56.699 "bdev_name": "Malloc0" 00:06:56.699 }, 00:06:56.699 { 00:06:56.699 "nbd_device": "/dev/nbd1", 00:06:56.699 "bdev_name": "Malloc1" 00:06:56.699 } 
00:06:56.699 ]' 00:06:56.699 03:51:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:56.699 { 00:06:56.699 "nbd_device": "/dev/nbd0", 00:06:56.699 "bdev_name": "Malloc0" 00:06:56.699 }, 00:06:56.699 { 00:06:56.699 "nbd_device": "/dev/nbd1", 00:06:56.699 "bdev_name": "Malloc1" 00:06:56.699 } 00:06:56.699 ]' 00:06:56.699 03:51:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:56.699 03:51:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:56.699 /dev/nbd1' 00:06:56.699 03:51:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:56.699 /dev/nbd1' 00:06:56.699 03:51:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:56.699 03:51:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:56.699 03:51:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:56.699 03:51:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:56.699 03:51:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:56.699 03:51:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:56.699 03:51:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.699 03:51:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:56.699 03:51:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:56.699 03:51:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:56.699 03:51:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:56.699 03:51:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:56.699 256+0 records in 00:06:56.699 256+0 records out 00:06:56.699 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115471 s, 90.8 MB/s 00:06:56.699 03:51:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.699 03:51:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:56.699 256+0 records in 00:06:56.699 256+0 records out 00:06:56.699 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.03248 s, 32.3 MB/s 00:06:56.699 03:51:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:56.699 03:51:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:56.699 256+0 records in 00:06:56.699 256+0 records out 00:06:56.699 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0369137 s, 28.4 MB/s 00:06:56.699 03:51:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:56.699 03:51:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.699 03:51:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:56.699 03:51:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:56.699 03:51:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:56.699 03:51:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:56.699 03:51:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:56.699 03:51:39 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:56.699 03:51:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:56.957 03:51:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:56.957 03:51:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:56.957 03:51:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:56.957 03:51:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:56.957 03:51:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.957 03:51:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.957 03:51:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:56.957 03:51:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:56.957 03:51:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.957 03:51:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:56.957 03:51:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:56.957 03:51:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:56.957 03:51:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:56.957 03:51:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:56.957 03:51:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:56.957 03:51:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:56.957 03:51:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:56.957 03:51:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:56.957 03:51:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:56.957 03:51:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:57.216 03:51:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:57.216 03:51:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:57.216 03:51:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:57.216 03:51:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.216 03:51:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.216 03:51:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:57.216 03:51:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:57.216 03:51:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.216 03:51:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:57.216 03:51:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.216 03:51:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:57.475 03:51:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:57.475 03:51:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:57.475 03:51:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:57.475 03:51:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:57.475 03:51:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.475 03:51:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:57.475 03:51:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:57.475 03:51:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:57.475 03:51:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:57.475 03:51:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:57.475 03:51:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:57.475 03:51:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:57.475 03:51:40 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:58.042 03:51:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:59.420 [2024-12-07 03:51:41.744724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:59.420 [2024-12-07 03:51:41.860477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.420 [2024-12-07 03:51:41.860479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.420 [2024-12-07 03:51:42.058298] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:59.420 [2024-12-07 03:51:42.058404] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:01.323 03:51:43 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59212 /var/tmp/spdk-nbd.sock 00:07:01.323 03:51:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59212 ']' 00:07:01.323 03:51:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:01.323 03:51:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:01.323 03:51:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
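Teardown in both suites goes through the killprocess helper, whose full trace appears earlier for the scheduler app (pid 59090) and again just below for app_repeat (pid 59212): verify the pid is alive with kill -0, look up its command name with ps on Linux, check whether that name is a bare sudo wrapper, then announce, kill, and wait. A condensed sketch; the real helper has a sudo-specific branch whose behavior is not visible in this trace and is simply omitted here:

    killprocess() {
        local pid=$1 process_name
        kill -0 "$pid"                                  # still alive?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1          # simplification (see above)
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"       # reap it; the app is a child of the test shell
    }

In the traces the looked-up names are reactor_2 and reactor_0, i.e. the SPDK reactor thread that owns the process, so the sudo branch is never taken in this run.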
00:07:01.323 03:51:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.323 03:51:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:01.323 03:51:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.323 03:51:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:01.323 03:51:43 event.app_repeat -- event/event.sh@39 -- # killprocess 59212 00:07:01.323 03:51:43 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59212 ']' 00:07:01.323 03:51:43 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59212 00:07:01.323 03:51:43 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:01.323 03:51:43 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.323 03:51:43 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59212 00:07:01.323 03:51:43 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.323 03:51:43 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.323 killing process with pid 59212 00:07:01.323 03:51:43 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59212' 00:07:01.323 03:51:43 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59212 00:07:01.323 03:51:43 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59212 00:07:02.259 spdk_app_start is called in Round 0. 00:07:02.259 Shutdown signal received, stop current app iteration 00:07:02.259 Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 reinitialization... 00:07:02.259 spdk_app_start is called in Round 1. 00:07:02.259 Shutdown signal received, stop current app iteration 00:07:02.259 Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 reinitialization... 00:07:02.259 spdk_app_start is called in Round 2. 00:07:02.259 Shutdown signal received, stop current app iteration 00:07:02.259 Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 reinitialization... 00:07:02.259 spdk_app_start is called in Round 3. 00:07:02.259 Shutdown signal received, stop current app iteration 00:07:02.259 03:51:44 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:02.259 03:51:44 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:02.259 00:07:02.259 real 0m19.402s 00:07:02.259 user 0m41.366s 00:07:02.259 sys 0m2.989s 00:07:02.259 03:51:44 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.259 03:51:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:02.259 ************************************ 00:07:02.259 END TEST app_repeat 00:07:02.259 ************************************ 00:07:02.259 03:51:44 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:02.259 03:51:44 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:02.259 03:51:44 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.259 03:51:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.259 03:51:44 event -- common/autotest_common.sh@10 -- # set +x 00:07:02.259 ************************************ 00:07:02.259 START TEST cpu_locks 00:07:02.259 ************************************ 00:07:02.259 03:51:44 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:02.520 * Looking for test storage... 
00:07:02.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:02.520 03:51:45 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:02.520 03:51:45 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:07:02.520 03:51:45 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:02.520 03:51:45 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:02.520 03:51:45 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.520 03:51:45 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.520 03:51:45 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.520 03:51:45 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.520 03:51:45 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.520 03:51:45 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.520 03:51:45 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.520 03:51:45 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.520 03:51:45 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.520 03:51:45 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.520 03:51:45 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.520 03:51:45 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:02.520 03:51:45 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:02.520 03:51:45 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.520 03:51:45 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:02.520 03:51:45 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:02.520 03:51:45 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:02.520 03:51:45 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.520 03:51:45 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:02.520 03:51:45 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.520 03:51:45 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:02.520 03:51:45 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:02.520 03:51:45 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.520 03:51:45 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:02.520 03:51:45 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.520 03:51:45 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.520 03:51:45 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.520 03:51:45 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:02.520 03:51:45 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.520 03:51:45 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:02.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.520 --rc genhtml_branch_coverage=1 00:07:02.520 --rc genhtml_function_coverage=1 00:07:02.520 --rc genhtml_legend=1 00:07:02.520 --rc geninfo_all_blocks=1 00:07:02.520 --rc geninfo_unexecuted_blocks=1 00:07:02.520 00:07:02.520 ' 00:07:02.520 03:51:45 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:02.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.520 --rc genhtml_branch_coverage=1 00:07:02.520 --rc genhtml_function_coverage=1 
00:07:02.520 --rc genhtml_legend=1 00:07:02.520 --rc geninfo_all_blocks=1 00:07:02.520 --rc geninfo_unexecuted_blocks=1 00:07:02.520 00:07:02.520 ' 00:07:02.520 03:51:45 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:02.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.520 --rc genhtml_branch_coverage=1 00:07:02.520 --rc genhtml_function_coverage=1 00:07:02.520 --rc genhtml_legend=1 00:07:02.520 --rc geninfo_all_blocks=1 00:07:02.520 --rc geninfo_unexecuted_blocks=1 00:07:02.520 00:07:02.520 ' 00:07:02.520 03:51:45 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:02.520 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.520 --rc genhtml_branch_coverage=1 00:07:02.520 --rc genhtml_function_coverage=1 00:07:02.520 --rc genhtml_legend=1 00:07:02.520 --rc geninfo_all_blocks=1 00:07:02.520 --rc geninfo_unexecuted_blocks=1 00:07:02.520 00:07:02.520 ' 00:07:02.520 03:51:45 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:02.520 03:51:45 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:02.520 03:51:45 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:02.520 03:51:45 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:02.520 03:51:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.520 03:51:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.520 03:51:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.520 ************************************ 00:07:02.520 START TEST default_locks 00:07:02.520 ************************************ 00:07:02.520 03:51:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:02.520 03:51:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59654 00:07:02.520 03:51:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:02.520 03:51:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59654 00:07:02.520 03:51:45 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59654 ']' 00:07:02.520 03:51:45 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.520 03:51:45 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.520 03:51:45 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.520 03:51:45 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.520 03:51:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.778 [2024-12-07 03:51:45.317231] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
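The xtrace walk above is autotest's coverage gate: it pulls the installed lcov version with awk '{print $NF}' and feeds it to the lt/cmp_versions helpers from scripts/common.sh, which split version strings on '.', '-' and ':' into arrays and compare them field by field. A stripped-down re-creation of that comparison (an approximation of the helper, not a copy of it; purely numeric fields are assumed):

  # Return 0 when version $1 is older than $2, mirroring 'lt 1.15 2'.
  version_lt() {
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1   # equal versions are not "less than"
  }

  version_lt 1.15 2 && echo "lcov predates 2.x: use the legacy LCOV_OPTS seen above"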
00:07:02.778 [2024-12-07 03:51:45.317372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59654 ] 00:07:02.778 [2024-12-07 03:51:45.496291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.037 [2024-12-07 03:51:45.607446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.971 03:51:46 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.971 03:51:46 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:03.971 03:51:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59654 00:07:03.971 03:51:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59654 00:07:03.971 03:51:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:04.230 03:51:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59654 00:07:04.230 03:51:46 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59654 ']' 00:07:04.230 03:51:46 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59654 00:07:04.230 03:51:46 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:04.230 03:51:46 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.230 03:51:46 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59654 00:07:04.230 03:51:46 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:04.230 03:51:46 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:04.230 03:51:46 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59654' 00:07:04.230 killing process with pid 59654 00:07:04.230 03:51:46 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59654 00:07:04.230 03:51:46 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59654 00:07:06.766 03:51:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59654 00:07:06.766 03:51:49 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:06.766 03:51:49 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59654 00:07:06.766 03:51:49 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:06.766 03:51:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:06.766 03:51:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:06.766 03:51:49 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:06.766 03:51:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59654 00:07:06.766 03:51:49 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59654 ']' 00:07:06.766 03:51:49 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
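The default_locks body above is the suite's core assertion in its simplest form: locks_exist lists the POSIX file locks held by the target PID and greps for the spdk_cpu_lock names that spdk_tgt takes, one per claimed core. Reproduced by hand, assuming the same util-linux lslocks the test calls:

  #!/usr/bin/env bash
  # Check that an spdk_tgt process holds at least one per-core lock file
  # (the spdk_cpu_lock_* entries), exactly as locks_exist does above.
  pid=$1

  if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
      echo "pid $pid holds its core lock(s)"
  else
      echo "pid $pid holds no core locks" >&2
      exit 1
  fi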
00:07:06.766 03:51:49 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.766 03:51:49 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.766 03:51:49 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.766 03:51:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.766 ERROR: process (pid: 59654) is no longer running 00:07:06.766 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59654) - No such process 00:07:06.766 03:51:49 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.766 03:51:49 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:06.766 03:51:49 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:06.766 03:51:49 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:06.766 03:51:49 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:06.766 03:51:49 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:06.766 03:51:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:06.766 03:51:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:06.766 03:51:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:06.766 03:51:49 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:06.766 00:07:06.766 real 0m4.151s 00:07:06.766 user 0m4.106s 00:07:06.766 sys 0m0.653s 00:07:06.766 03:51:49 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.766 ************************************ 00:07:06.766 END TEST default_locks 00:07:06.766 ************************************ 00:07:06.766 03:51:49 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.766 03:51:49 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:06.766 03:51:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.766 03:51:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.766 03:51:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.766 ************************************ 00:07:06.766 START TEST default_locks_via_rpc 00:07:06.766 ************************************ 00:07:06.766 03:51:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:06.766 03:51:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59737 00:07:06.766 03:51:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:06.766 03:51:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59737 00:07:06.766 03:51:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59737 ']' 00:07:06.766 03:51:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.766 03:51:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.766 03:51:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.766 03:51:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.766 03:51:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.025 [2024-12-07 03:51:49.565958] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:07:07.025 [2024-12-07 03:51:49.566148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59737 ] 00:07:07.025 [2024-12-07 03:51:49.760396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.284 [2024-12-07 03:51:49.878030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.223 03:51:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.223 03:51:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:08.223 03:51:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:08.223 03:51:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.223 03:51:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.224 03:51:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.224 03:51:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:08.224 03:51:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:08.224 03:51:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:08.224 03:51:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:08.224 03:51:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:08.224 03:51:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:08.224 03:51:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.224 03:51:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:08.224 03:51:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59737 00:07:08.224 03:51:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59737 00:07:08.224 03:51:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:08.790 03:51:51 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59737 00:07:08.790 03:51:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59737 ']' 00:07:08.790 03:51:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59737 00:07:08.790 03:51:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:08.790 03:51:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.790 03:51:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59737 00:07:08.790 03:51:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.790 03:51:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.790 killing process with pid 59737 00:07:08.790 03:51:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59737' 00:07:08.790 03:51:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59737 00:07:08.790 03:51:51 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59737 00:07:11.323 00:07:11.323 real 0m4.318s 00:07:11.323 user 0m4.234s 00:07:11.323 sys 0m0.734s 00:07:11.324 03:51:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.324 03:51:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.324 ************************************ 00:07:11.324 END TEST default_locks_via_rpc 00:07:11.324 ************************************ 00:07:11.324 03:51:53 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:11.324 03:51:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.324 03:51:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.324 03:51:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.324 ************************************ 00:07:11.324 START TEST non_locking_app_on_locked_coremask 00:07:11.324 ************************************ 00:07:11.324 03:51:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:11.324 03:51:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59811 00:07:11.324 03:51:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:11.324 03:51:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59811 /var/tmp/spdk.sock 00:07:11.324 03:51:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59811 ']' 00:07:11.324 03:51:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.324 03:51:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.324 03:51:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.324 03:51:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.324 03:51:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.583 [2024-12-07 03:51:53.940189] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization...
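default_locks_via_rpc, which just ran above, covers the same lock files but toggles them on a live target: the framework_disable_cpumask_locks and framework_enable_cpumask_locks RPCs in the trace release and re-acquire the per-core locks at runtime. The test's rpc_cmd wrapper reduces to plain rpc.py calls against the default /var/tmp/spdk.sock, roughly:

  #!/usr/bin/env bash
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Drop the core locks of the running target; after this,
  # 'lslocks -p <pid> | grep spdk_cpu_lock' should find nothing.
  "$RPC" framework_disable_cpumask_locks

  # Re-claim them; this fails if another process grabbed a core meanwhile.
  "$RPC" framework_enable_cpumask_locks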
00:07:11.324 [2024-12-07 03:51:53.940315] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59811 ] 00:07:11.583 [2024-12-07 03:51:54.123730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.583 [2024-12-07 03:51:54.241156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.520 03:51:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:12.520 03:51:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:12.520 03:51:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59832 00:07:12.520 03:51:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59832 /var/tmp/spdk2.sock 00:07:12.520 03:51:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:12.520 03:51:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59832 ']' 00:07:12.520 03:51:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:12.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:12.520 03:51:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:12.520 03:51:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:12.520 03:51:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:12.520 03:51:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.520 [2024-12-07 03:51:55.229520] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:07:12.520 [2024-12-07 03:51:55.229648] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59832 ] 00:07:12.779 [2024-12-07 03:51:55.416548] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:12.779 [2024-12-07 03:51:55.416623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.038 [2024-12-07 03:51:55.631682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.569 03:51:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.569 03:51:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:15.569 03:51:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59811 00:07:15.569 03:51:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59811 00:07:15.569 03:51:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:16.136 03:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59811 00:07:16.136 03:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59811 ']' 00:07:16.136 03:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59811 00:07:16.136 03:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:16.136 03:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.136 03:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59811 00:07:16.136 03:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.136 03:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.136 killing process with pid 59811 00:07:16.136 03:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59811' 00:07:16.136 03:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59811 00:07:16.136 03:51:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59811 00:07:21.423 03:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59832 00:07:21.423 03:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59832 ']' 00:07:21.423 03:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59832 00:07:21.423 03:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:21.681 03:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.681 03:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59832 00:07:21.681 03:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:21.681 killing process with pid 59832 00:07:21.681 03:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:21.681 03:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59832' 00:07:21.681 03:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59832 00:07:21.681 03:52:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59832 00:07:24.279 00:07:24.279 real 0m13.067s 00:07:24.279 user 0m13.222s 00:07:24.279 sys 0m1.594s 00:07:24.279 03:52:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.279 03:52:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:24.279 ************************************ 00:07:24.279 END TEST non_locking_app_on_locked_coremask 00:07:24.279 ************************************ 00:07:24.279 03:52:06 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:24.279 03:52:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:24.279 03:52:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.279 03:52:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:24.279 ************************************ 00:07:24.279 START TEST locking_app_on_unlocked_coremask 00:07:24.279 ************************************ 00:07:24.279 03:52:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:24.279 03:52:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60000 00:07:24.279 03:52:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60000 /var/tmp/spdk.sock 00:07:24.279 03:52:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:24.279 03:52:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60000 ']' 00:07:24.279 03:52:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.279 03:52:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.279 03:52:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.279 03:52:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.279 03:52:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:24.538 [2024-12-07 03:52:07.079313] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:07:24.538 [2024-12-07 03:52:07.079969] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60000 ] 00:07:24.538 [2024-12-07 03:52:07.263219] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
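Two flag placements are being contrasted around this point. In non_locking_app_on_locked_coremask, which just finished, the first target took the core 0 lock and the second booted with --disable-cpumask-locks so it could share the core; locking_app_on_unlocked_coremask, whose startup is printed above, flips that: the first instance (pid 60000) skips locking, leaving core 0 free for a second, locking instance. Condensed from the command lines in the trace:

  #!/usr/bin/env bash
  BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  # Just finished: lock holder first, opt-out second (shares core 0).
  "$BIN" -m 0x1 &
  "$BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &

  # Starting above: opt-out first, so a later instance can still
  # claim the core 0 lock for itself.
  # "$BIN" -m 0x1 --disable-cpumask-locks &
  # "$BIN" -m 0x1 -r /var/tmp/spdk2.sock &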
00:07:24.538 [2024-12-07 03:52:07.263297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.796 [2024-12-07 03:52:07.407841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.735 03:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.735 03:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:25.735 03:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60016 00:07:25.735 03:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60016 /var/tmp/spdk2.sock 00:07:25.735 03:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:25.735 03:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60016 ']' 00:07:25.735 03:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:25.735 03:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:25.735 03:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:25.735 03:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.735 03:52:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:25.735 [2024-12-07 03:52:08.463606] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
00:07:25.735 [2024-12-07 03:52:08.463732] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60016 ] 00:07:25.995 [2024-12-07 03:52:08.646495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.255 [2024-12-07 03:52:08.879708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.792 03:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.792 03:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:28.792 03:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60016 00:07:28.792 03:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60016 00:07:28.792 03:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:29.361 03:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60000 00:07:29.361 03:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60000 ']' 00:07:29.361 03:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60000 00:07:29.361 03:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:29.361 03:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.361 03:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60000 00:07:29.361 03:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.361 03:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.361 03:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60000' 00:07:29.361 killing process with pid 60000 00:07:29.361 03:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60000 00:07:29.361 03:52:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60000 00:07:34.633 03:52:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60016 00:07:34.633 03:52:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60016 ']' 00:07:34.633 03:52:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60016 00:07:34.633 03:52:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:34.633 03:52:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.633 03:52:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60016 00:07:34.633 03:52:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:34.633 03:52:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:34.633 killing process with pid 60016 03:52:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60016' 03:52:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60016 03:52:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60016 00:07:36.533 00:07:36.533 real 0m12.170s 00:07:36.533 user 0m12.390s 00:07:36.533 sys 0m1.495s 00:07:36.533 03:52:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.533 03:52:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:36.533 ************************************ 00:07:36.533 END TEST locking_app_on_unlocked_coremask 00:07:36.533 ************************************ 00:07:36.533 03:52:19 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:36.533 03:52:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.533 03:52:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.533 03:52:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.533 ************************************ 00:07:36.533 START TEST locking_app_on_locked_coremask 00:07:36.533 ************************************ 00:07:36.533 03:52:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:36.533 03:52:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60164 00:07:36.533 03:52:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60164 /var/tmp/spdk.sock 00:07:36.533 03:52:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:36.533 03:52:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60164 ']' 00:07:36.533 03:52:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.533 03:52:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.533 03:52:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.533 03:52:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.533 03:52:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:36.791 [2024-12-07 03:52:19.324009] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization...
00:07:36.792 [2024-12-07 03:52:19.324153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60164 ] 00:07:36.792 [2024-12-07 03:52:19.506359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.051 [2024-12-07 03:52:19.621596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.015 03:52:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.015 03:52:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:38.015 03:52:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60191 00:07:38.015 03:52:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60191 /var/tmp/spdk2.sock 00:07:38.015 03:52:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:38.015 03:52:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:38.015 03:52:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60191 /var/tmp/spdk2.sock 00:07:38.015 03:52:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:38.015 03:52:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.015 03:52:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:38.015 03:52:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:38.015 03:52:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60191 /var/tmp/spdk2.sock 00:07:38.015 03:52:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60191 ']' 00:07:38.015 03:52:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:38.015 03:52:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.015 03:52:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:38.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:38.015 03:52:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.015 03:52:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:38.015 [2024-12-07 03:52:20.588915] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
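The NOT wrapper threading through the trace above is how the suite encodes an expected failure: it runs the wrapped command and inverts the status, so pid 60191 dying (because 60164 already holds the core 0 lock) counts as a pass. A simplified approximation of that helper, without the es>128 signal bookkeeping the real autotest_common.sh version carries:

  # Succeed only when the wrapped command fails, cf. 'NOT waitforlisten ...'.
  NOT() {
      if "$@"; then
          return 1   # unexpected success
      fi
      return 0       # expected failure
  }

  NOT waitforlisten 60191 /var/tmp/spdk2.sock \
      && echo 'second target was rejected, as intended'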
00:07:38.015 [2024-12-07 03:52:20.589284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60191 ] 00:07:38.289 [2024-12-07 03:52:20.772495] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60164 has claimed it. [2024-12-07 03:52:20.772560] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:38.548 ERROR: process (pid: 60191) is no longer running 00:07:38.548 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60191) - No such process 00:07:38.548 03:52:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.548 03:52:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:38.548 03:52:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:38.548 03:52:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:38.548 03:52:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:38.548 03:52:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:38.548 03:52:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60164 00:07:38.548 03:52:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60164 00:07:38.548 03:52:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:39.117 03:52:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60164 00:07:39.117 03:52:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60164 ']' 00:07:39.117 03:52:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60164 00:07:39.117 03:52:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:39.117 03:52:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:39.117 03:52:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60164 00:07:39.117 03:52:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:39.117 03:52:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:39.117 killing process with pid 60164 00:07:39.117 03:52:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60164' 00:07:39.117 03:52:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60164 00:07:39.117 03:52:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60164 00:07:41.657 ************************************ 00:07:41.657 END TEST locking_app_on_locked_coremask 00:07:41.657 ************************************ 00:07:41.657 00:07:41.657 real 0m4.904s 00:07:41.657 user 0m5.061s 00:07:41.657 sys 0m0.885s 00:07:41.657 03:52:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:41.657 03:52:24 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:41.657 03:52:24 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:41.657 03:52:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:41.657 03:52:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.657 03:52:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:41.657 ************************************ 00:07:41.657 START TEST locking_overlapped_coremask 00:07:41.657 ************************************ 00:07:41.657 03:52:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:41.657 03:52:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60255 00:07:41.657 03:52:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:41.657 03:52:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60255 /var/tmp/spdk.sock 00:07:41.657 03:52:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60255 ']' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.657 03:52:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.657 03:52:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.657 03:52:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.657 03:52:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.657 03:52:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:41.657 [2024-12-07 03:52:24.309893] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization...
00:07:41.657 [2024-12-07 03:52:24.310271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60255 ] 00:07:41.917 [2024-12-07 03:52:24.493473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:41.917 [2024-12-07 03:52:24.606328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.917 [2024-12-07 03:52:24.606468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.917 [2024-12-07 03:52:24.606502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.856 03:52:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:42.856 03:52:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:42.856 03:52:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:42.856 03:52:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60278 00:07:42.856 03:52:25 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60278 /var/tmp/spdk2.sock 00:07:42.856 03:52:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:42.856 03:52:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60278 /var/tmp/spdk2.sock 00:07:42.856 03:52:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:42.856 03:52:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:42.856 03:52:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:42.856 03:52:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:42.856 03:52:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60278 /var/tmp/spdk2.sock 00:07:42.856 03:52:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60278 ']' 00:07:42.856 03:52:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:42.856 03:52:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:42.856 03:52:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:42.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:42.856 03:52:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:42.856 03:52:25 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:42.856 [2024-12-07 03:52:25.580855] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
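The masks picked here are meant to collide: -m 0x7 is binary 111 (cores 0-2, hence the three reactors above) and -m 0x1c is binary 11100 (cores 2-4), so the two targets both want core 2, the core named in the claim error that follows. A small helper to expand a coremask into core numbers, matching that arithmetic:

  # Expand a hex coremask into the core indices it selects.
  mask_to_cores() {
      local mask=$(( $1 )) core=0
      local -a cores=()
      while (( mask )); do
          (( mask & 1 )) && cores+=("$core")
          (( mask >>= 1, core += 1 ))
      done
      echo "${cores[@]}"
  }

  mask_to_cores 0x7    # -> 0 1 2
  mask_to_cores 0x1c   # -> 2 3 4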
00:07:42.856 [2024-12-07 03:52:25.581389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60278 ] 00:07:43.114 [2024-12-07 03:52:25.771141] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60255 has claimed it. [2024-12-07 03:52:25.771361] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:43.682 ERROR: process (pid: 60278) is no longer running /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60278) - No such process 00:07:43.682 03:52:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.682 03:52:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:43.682 03:52:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:43.682 03:52:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:43.682 03:52:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:43.682 03:52:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:43.682 03:52:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:43.682 03:52:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:43.682 03:52:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:43.682 03:52:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:43.682 03:52:26 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60255 00:07:43.682 03:52:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60255 ']' 00:07:43.682 03:52:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60255 00:07:43.682 03:52:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:43.682 03:52:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.682 03:52:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60255 00:07:43.682 03:52:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:43.682 03:52:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:43.682 03:52:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60255' killing process with pid 60255 00:07:43.682 03:52:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60255 00:07:43.682 03:52:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60255 00:07:46.219 00:07:46.219 real 0m4.488s 00:07:46.219 user 0m12.084s 00:07:46.219 sys 0m0.654s 00:07:46.219 03:52:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.219 03:52:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:46.219 ************************************ 00:07:46.219 END TEST locking_overlapped_coremask 00:07:46.219 ************************************ 00:07:46.219 03:52:28 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:46.219 03:52:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:46.219 03:52:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.219 03:52:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:46.219 ************************************ 00:07:46.219 START TEST locking_overlapped_coremask_via_rpc 00:07:46.219 ************************************ 00:07:46.219 03:52:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:46.219 03:52:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60343 00:07:46.219 03:52:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60343 /var/tmp/spdk.sock 00:07:46.219 03:52:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:46.219 03:52:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60343 ']' 00:07:46.219 03:52:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.219 03:52:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:46.219 03:52:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.219 03:52:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:46.219 03:52:28 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:46.219 [2024-12-07 03:52:28.874461] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:07:46.219 [2024-12-07 03:52:28.874801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60343 ] 00:07:46.477 [2024-12-07 03:52:29.057851] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:46.477 [2024-12-07 03:52:29.057900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:46.477 [2024-12-07 03:52:29.182269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.477 [2024-12-07 03:52:29.182410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.477 [2024-12-07 03:52:29.182450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.412 03:52:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:47.412 03:52:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:47.412 03:52:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60366 00:07:47.412 03:52:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:47.412 03:52:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60366 /var/tmp/spdk2.sock 00:07:47.412 03:52:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60366 ']' 00:07:47.412 03:52:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:47.412 03:52:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.412 03:52:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:47.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:47.412 03:52:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.412 03:52:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:47.670 [2024-12-07 03:52:30.149032] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:07:47.670 [2024-12-07 03:52:30.149709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60366 ] 00:07:47.670 [2024-12-07 03:52:30.335760] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
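The second spdk_tgt (pid 60366) stays isolated from the first by answering on its own RPC socket (-r /var/tmp/spdk2.sock) and using a distinct --file-prefix for its hugepage files; waitforlisten then blocks until that socket responds. A rough sketch of the polling idea; the real helper in autotest_common.sh also checks process liveness and bounds its retries, so treat this as an approximation:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    until "$rpc" -s /var/tmp/spdk2.sock spdk_get_version >/dev/null 2>&1; do
        sleep 0.1    # target not yet listening on the socket
    done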
00:07:47.670 [2024-12-07 03:52:30.335810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:47.928 [2024-12-07 03:52:30.580040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.928 [2024-12-07 03:52:30.583073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.928 [2024-12-07 03:52:30.583106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:50.454 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.454 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.455 [2024-12-07 03:52:32.722114] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60343 has claimed it. 
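Each reactor core is claimed through a per-core lock file, /var/tmp/spdk_cpu_lock_NNN, which is why claim_cpu_cores fails above with 'Cannot create lock on core 2': pid 60343 already holds that file. Assuming the claim is a plain exclusive flock (and that util-linux flock(1) is installed), a quick probe from the shell would look like:

    flock -n /var/tmp/spdk_cpu_lock_002 -c true \
        && echo 'core 2 lock is free' \
        || echo 'core 2 is claimed by a running reactor'

The JSON-RPC response below carries the same failure back to the caller.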
00:07:50.455 request: 00:07:50.455 { 00:07:50.455 "method": "framework_enable_cpumask_locks", 00:07:50.455 "req_id": 1 00:07:50.455 } 00:07:50.455 Got JSON-RPC error response 00:07:50.455 response: 00:07:50.455 { 00:07:50.455 "code": -32603, 00:07:50.455 "message": "Failed to claim CPU core: 2" 00:07:50.455 } 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60343 /var/tmp/spdk.sock 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60343 ']' 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60366 /var/tmp/spdk2.sock 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60366 ']' 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:50.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
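The check_remaining_locks step traced below leans on the difference between glob and brace expansion: the glob array collects whichever lock files actually exist, the brace expansion generates the names the -m 0x7 mask should have produced, and the check passes only when the two lists match exactly. Condensed from event/cpu_locks.sh:

    locks=(/var/tmp/spdk_cpu_lock_*)                     # lock files that exist (glob)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # names cores 0-2 should own
    [[ ${locks[*]} == "${locks_expected[*]}" ]] && echo 'exactly cores 0-2 are locked'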
00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.455 03:52:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.455 03:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.455 03:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:50.455 03:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:50.455 03:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:50.455 03:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:50.455 03:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:50.455 00:07:50.455 real 0m4.408s 00:07:50.455 user 0m1.230s 00:07:50.455 sys 0m0.231s 00:07:50.455 ************************************ 00:07:50.455 END TEST locking_overlapped_coremask_via_rpc 00:07:50.455 ************************************ 00:07:50.455 03:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.455 03:52:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.713 03:52:33 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:50.713 03:52:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60343 ]] 00:07:50.713 03:52:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60343 00:07:50.713 03:52:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60343 ']' 00:07:50.713 03:52:33 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60343 00:07:50.713 03:52:33 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:50.713 03:52:33 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:50.713 03:52:33 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60343 00:07:50.713 03:52:33 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:50.713 killing process with pid 60343 00:07:50.713 03:52:33 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:50.713 03:52:33 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60343' 00:07:50.713 03:52:33 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60343 00:07:50.713 03:52:33 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60343 00:07:53.278 03:52:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60366 ]] 00:07:53.278 03:52:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60366 00:07:53.278 03:52:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60366 ']' 00:07:53.278 03:52:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60366 00:07:53.278 03:52:36 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:53.278 03:52:36 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:53.279 
03:52:36 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60366 00:07:53.557 killing process with pid 60366 00:07:53.557 03:52:36 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:53.557 03:52:36 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:53.557 03:52:36 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60366' 00:07:53.557 03:52:36 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60366 00:07:53.557 03:52:36 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60366 00:07:56.115 03:52:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:56.115 03:52:38 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:56.115 03:52:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60343 ]] 00:07:56.115 03:52:38 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60343 00:07:56.115 03:52:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60343 ']' 00:07:56.115 03:52:38 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60343 00:07:56.115 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60343) - No such process 00:07:56.115 Process with pid 60343 is not found 00:07:56.115 03:52:38 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60343 is not found' 00:07:56.115 03:52:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60366 ]] 00:07:56.115 03:52:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60366 00:07:56.115 03:52:38 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60366 ']' 00:07:56.115 03:52:38 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60366 00:07:56.115 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60366) - No such process 00:07:56.115 03:52:38 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60366 is not found' 00:07:56.115 Process with pid 60366 is not found 00:07:56.115 03:52:38 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:56.115 00:07:56.115 real 0m53.464s 00:07:56.115 user 1m29.790s 00:07:56.115 sys 0m7.580s 00:07:56.115 03:52:38 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.115 ************************************ 00:07:56.115 END TEST cpu_locks 00:07:56.115 ************************************ 00:07:56.115 03:52:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:56.115 ************************************ 00:07:56.115 END TEST event 00:07:56.115 ************************************ 00:07:56.115 00:07:56.115 real 1m25.224s 00:07:56.115 user 2m32.754s 00:07:56.115 sys 0m11.861s 00:07:56.115 03:52:38 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.115 03:52:38 event -- common/autotest_common.sh@10 -- # set +x 00:07:56.115 03:52:38 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:56.115 03:52:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:56.115 03:52:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.115 03:52:38 -- common/autotest_common.sh@10 -- # set +x 00:07:56.115 ************************************ 00:07:56.115 START TEST thread 00:07:56.115 ************************************ 00:07:56.115 03:52:38 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:56.115 * Looking for test storage... 
00:07:56.115 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:56.115 03:52:38 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:56.115 03:52:38 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:07:56.115 03:52:38 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:56.115 03:52:38 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:56.115 03:52:38 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.115 03:52:38 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.115 03:52:38 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.115 03:52:38 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.115 03:52:38 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.115 03:52:38 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.115 03:52:38 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.115 03:52:38 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.115 03:52:38 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.115 03:52:38 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.115 03:52:38 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.115 03:52:38 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:56.115 03:52:38 thread -- scripts/common.sh@345 -- # : 1 00:07:56.115 03:52:38 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.115 03:52:38 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:56.115 03:52:38 thread -- scripts/common.sh@365 -- # decimal 1 00:07:56.115 03:52:38 thread -- scripts/common.sh@353 -- # local d=1 00:07:56.115 03:52:38 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.115 03:52:38 thread -- scripts/common.sh@355 -- # echo 1 00:07:56.115 03:52:38 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.115 03:52:38 thread -- scripts/common.sh@366 -- # decimal 2 00:07:56.115 03:52:38 thread -- scripts/common.sh@353 -- # local d=2 00:07:56.115 03:52:38 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.115 03:52:38 thread -- scripts/common.sh@355 -- # echo 2 00:07:56.115 03:52:38 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.115 03:52:38 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.115 03:52:38 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.115 03:52:38 thread -- scripts/common.sh@368 -- # return 0 00:07:56.116 03:52:38 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.116 03:52:38 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:56.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.116 --rc genhtml_branch_coverage=1 00:07:56.116 --rc genhtml_function_coverage=1 00:07:56.116 --rc genhtml_legend=1 00:07:56.116 --rc geninfo_all_blocks=1 00:07:56.116 --rc geninfo_unexecuted_blocks=1 00:07:56.116 00:07:56.116 ' 00:07:56.116 03:52:38 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:56.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.116 --rc genhtml_branch_coverage=1 00:07:56.116 --rc genhtml_function_coverage=1 00:07:56.116 --rc genhtml_legend=1 00:07:56.116 --rc geninfo_all_blocks=1 00:07:56.116 --rc geninfo_unexecuted_blocks=1 00:07:56.116 00:07:56.116 ' 00:07:56.116 03:52:38 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:56.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:56.116 --rc genhtml_branch_coverage=1 00:07:56.116 --rc genhtml_function_coverage=1 00:07:56.116 --rc genhtml_legend=1 00:07:56.116 --rc geninfo_all_blocks=1 00:07:56.116 --rc geninfo_unexecuted_blocks=1 00:07:56.116 00:07:56.116 ' 00:07:56.116 03:52:38 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:56.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.116 --rc genhtml_branch_coverage=1 00:07:56.116 --rc genhtml_function_coverage=1 00:07:56.116 --rc genhtml_legend=1 00:07:56.116 --rc geninfo_all_blocks=1 00:07:56.116 --rc geninfo_unexecuted_blocks=1 00:07:56.116 00:07:56.116 ' 00:07:56.116 03:52:38 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:56.116 03:52:38 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:56.116 03:52:38 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.116 03:52:38 thread -- common/autotest_common.sh@10 -- # set +x 00:07:56.116 ************************************ 00:07:56.116 START TEST thread_poller_perf 00:07:56.116 ************************************ 00:07:56.116 03:52:38 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:56.375 [2024-12-07 03:52:38.868322] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:07:56.375 [2024-12-07 03:52:38.868583] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60563 ] 00:07:56.375 [2024-12-07 03:52:39.050642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.634 [2024-12-07 03:52:39.161131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.634 Running 1000 pollers for 1 seconds with 1 microseconds period. 
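The result block that follows reports poller_cost as plain division: busy cycles over total_run_count, with the nanosecond figure derived from the reported tsc_hz. Reproducing the first run's figures in the shell:

    echo $(( 2499857108 / 399000 ))    # 6265 cyc per poller invocation
    awk 'BEGIN { printf "%.0f nsec\n", 6265 / 2490000000 * 1e9 }'    # ~2516 nsec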
00:07:58.011 [2024-12-07T03:52:40.747Z] ====================================== 00:07:58.011 [2024-12-07T03:52:40.747Z] busy:2499857108 (cyc) 00:07:58.011 [2024-12-07T03:52:40.747Z] total_run_count: 399000 00:07:58.011 [2024-12-07T03:52:40.747Z] tsc_hz: 2490000000 (cyc) 00:07:58.011 [2024-12-07T03:52:40.747Z] ====================================== 00:07:58.011 [2024-12-07T03:52:40.747Z] poller_cost: 6265 (cyc), 2516 (nsec) 00:07:58.011 00:07:58.011 real 0m1.576s 00:07:58.011 user 0m1.345s 00:07:58.011 sys 0m0.121s 00:07:58.011 03:52:40 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.011 03:52:40 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:58.011 ************************************ 00:07:58.011 END TEST thread_poller_perf 00:07:58.011 ************************************ 00:07:58.012 03:52:40 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:58.012 03:52:40 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:58.012 03:52:40 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.012 03:52:40 thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.012 ************************************ 00:07:58.012 START TEST thread_poller_perf 00:07:58.012 ************************************ 00:07:58.012 03:52:40 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:58.012 [2024-12-07 03:52:40.523334] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:07:58.012 [2024-12-07 03:52:40.523464] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60600 ] 00:07:58.012 [2024-12-07 03:52:40.706284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.271 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:07:58.271 [2024-12-07 03:52:40.823519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.648 [2024-12-07T03:52:42.384Z] ====================================== 00:07:59.648 [2024-12-07T03:52:42.384Z] busy:2494086702 (cyc) 00:07:59.648 [2024-12-07T03:52:42.384Z] total_run_count: 4930000 00:07:59.648 [2024-12-07T03:52:42.384Z] tsc_hz: 2490000000 (cyc) 00:07:59.648 [2024-12-07T03:52:42.384Z] ====================================== 00:07:59.648 [2024-12-07T03:52:42.384Z] poller_cost: 505 (cyc), 202 (nsec) 00:07:59.648 ************************************ 00:07:59.648 END TEST thread_poller_perf 00:07:59.648 ************************************ 00:07:59.648 00:07:59.648 real 0m1.582s 00:07:59.648 user 0m1.351s 00:07:59.648 sys 0m0.123s 00:07:59.648 03:52:42 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.648 03:52:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:59.648 03:52:42 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:59.648 00:07:59.648 real 0m3.541s 00:07:59.648 user 0m2.864s 00:07:59.648 sys 0m0.459s 00:07:59.648 ************************************ 00:07:59.648 END TEST thread 00:07:59.648 ************************************ 00:07:59.648 03:52:42 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.648 03:52:42 thread -- common/autotest_common.sh@10 -- # set +x 00:07:59.648 03:52:42 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:59.648 03:52:42 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:59.648 03:52:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:59.648 03:52:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.648 03:52:42 -- common/autotest_common.sh@10 -- # set +x 00:07:59.648 ************************************ 00:07:59.648 START TEST app_cmdline 00:07:59.648 ************************************ 00:07:59.648 03:52:42 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:59.648 * Looking for test storage... 
00:07:59.648 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:59.648 03:52:42 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:59.648 03:52:42 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:07:59.648 03:52:42 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:59.908 03:52:42 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:59.908 03:52:42 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:59.908 03:52:42 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:59.908 03:52:42 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:59.908 03:52:42 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:59.908 03:52:42 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:59.908 03:52:42 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:59.908 03:52:42 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:59.908 03:52:42 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:59.908 03:52:42 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:59.908 03:52:42 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:59.908 03:52:42 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:59.908 03:52:42 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:59.908 03:52:42 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:59.908 03:52:42 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:59.909 03:52:42 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:59.909 03:52:42 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:59.909 03:52:42 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:59.909 03:52:42 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:59.909 03:52:42 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:59.909 03:52:42 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:59.909 03:52:42 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:59.909 03:52:42 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:59.909 03:52:42 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:59.909 03:52:42 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:59.909 03:52:42 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:59.909 03:52:42 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:59.909 03:52:42 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:59.909 03:52:42 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:59.909 03:52:42 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:59.909 03:52:42 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:59.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.909 --rc genhtml_branch_coverage=1 00:07:59.909 --rc genhtml_function_coverage=1 00:07:59.909 --rc genhtml_legend=1 00:07:59.909 --rc geninfo_all_blocks=1 00:07:59.909 --rc geninfo_unexecuted_blocks=1 00:07:59.909 00:07:59.909 ' 00:07:59.909 03:52:42 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:59.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.909 --rc genhtml_branch_coverage=1 00:07:59.909 --rc genhtml_function_coverage=1 00:07:59.909 --rc genhtml_legend=1 00:07:59.909 --rc geninfo_all_blocks=1 00:07:59.909 --rc geninfo_unexecuted_blocks=1 00:07:59.909 
00:07:59.909 ' 00:07:59.909 03:52:42 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:59.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.909 --rc genhtml_branch_coverage=1 00:07:59.909 --rc genhtml_function_coverage=1 00:07:59.909 --rc genhtml_legend=1 00:07:59.909 --rc geninfo_all_blocks=1 00:07:59.909 --rc geninfo_unexecuted_blocks=1 00:07:59.909 00:07:59.909 ' 00:07:59.909 03:52:42 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:59.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:59.909 --rc genhtml_branch_coverage=1 00:07:59.909 --rc genhtml_function_coverage=1 00:07:59.909 --rc genhtml_legend=1 00:07:59.909 --rc geninfo_all_blocks=1 00:07:59.909 --rc geninfo_unexecuted_blocks=1 00:07:59.909 00:07:59.909 ' 00:07:59.909 03:52:42 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:59.909 03:52:42 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:59.909 03:52:42 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60689 00:07:59.909 03:52:42 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60689 00:07:59.909 03:52:42 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60689 ']' 00:07:59.909 03:52:42 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.909 03:52:42 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.909 03:52:42 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.909 03:52:42 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.909 03:52:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:59.909 [2024-12-07 03:52:42.509410] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
00:07:59.909 [2024-12-07 03:52:42.509701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60689 ] 00:08:00.168 [2024-12-07 03:52:42.692544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.168 [2024-12-07 03:52:42.805120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.105 03:52:43 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:01.105 03:52:43 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:08:01.105 03:52:43 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:01.364 { 00:08:01.364 "version": "SPDK v25.01-pre git sha1 42416bc2c", 00:08:01.364 "fields": { 00:08:01.364 "major": 25, 00:08:01.364 "minor": 1, 00:08:01.364 "patch": 0, 00:08:01.364 "suffix": "-pre", 00:08:01.364 "commit": "42416bc2c" 00:08:01.364 } 00:08:01.364 } 00:08:01.364 03:52:43 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:01.364 03:52:43 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:01.364 03:52:43 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:01.364 03:52:43 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:01.364 03:52:43 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:01.364 03:52:43 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:01.364 03:52:43 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:01.364 03:52:43 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.364 03:52:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:01.364 03:52:43 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.364 03:52:43 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:01.364 03:52:43 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:01.364 03:52:43 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:01.364 03:52:43 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:08:01.364 03:52:43 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:01.364 03:52:43 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:01.364 03:52:43 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.364 03:52:43 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:01.364 03:52:43 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.364 03:52:43 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:01.364 03:52:43 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:01.364 03:52:43 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:01.364 03:52:43 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:01.364 03:52:43 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:01.623 request: 00:08:01.623 { 00:08:01.623 "method": "env_dpdk_get_mem_stats", 00:08:01.623 "req_id": 1 00:08:01.623 } 00:08:01.623 Got JSON-RPC error response 00:08:01.623 response: 00:08:01.623 { 00:08:01.623 "code": -32601, 00:08:01.623 "message": "Method not found" 00:08:01.623 } 00:08:01.623 03:52:44 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:08:01.623 03:52:44 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:01.623 03:52:44 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:01.623 03:52:44 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:01.623 03:52:44 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60689 00:08:01.623 03:52:44 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60689 ']' 00:08:01.623 03:52:44 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60689 00:08:01.623 03:52:44 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:08:01.623 03:52:44 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:01.623 03:52:44 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60689 00:08:01.623 killing process with pid 60689 00:08:01.623 03:52:44 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:01.623 03:52:44 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:01.623 03:52:44 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60689' 00:08:01.623 03:52:44 app_cmdline -- common/autotest_common.sh@973 -- # kill 60689 00:08:01.623 03:52:44 app_cmdline -- common/autotest_common.sh@978 -- # wait 60689 00:08:04.157 00:08:04.157 real 0m4.386s 00:08:04.157 user 0m4.531s 00:08:04.157 sys 0m0.674s 00:08:04.157 ************************************ 00:08:04.157 END TEST app_cmdline 00:08:04.157 ************************************ 00:08:04.157 03:52:46 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.157 03:52:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:04.157 03:52:46 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:04.157 03:52:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:04.157 03:52:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.157 03:52:46 -- common/autotest_common.sh@10 -- # set +x 00:08:04.157 ************************************ 00:08:04.157 START TEST version 00:08:04.157 ************************************ 00:08:04.157 03:52:46 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:04.157 * Looking for test storage... 
00:08:04.157 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:04.157 03:52:46 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:04.157 03:52:46 version -- common/autotest_common.sh@1711 -- # lcov --version 00:08:04.157 03:52:46 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:04.157 03:52:46 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:04.157 03:52:46 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:04.157 03:52:46 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:04.157 03:52:46 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:04.157 03:52:46 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:04.157 03:52:46 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:04.157 03:52:46 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:04.157 03:52:46 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:04.157 03:52:46 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:04.157 03:52:46 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:04.157 03:52:46 version -- scripts/common.sh@341 -- # ver2_l=1 00:08:04.157 03:52:46 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:04.157 03:52:46 version -- scripts/common.sh@344 -- # case "$op" in 00:08:04.157 03:52:46 version -- scripts/common.sh@345 -- # : 1 00:08:04.157 03:52:46 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:04.157 03:52:46 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:04.157 03:52:46 version -- scripts/common.sh@365 -- # decimal 1 00:08:04.157 03:52:46 version -- scripts/common.sh@353 -- # local d=1 00:08:04.157 03:52:46 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:04.157 03:52:46 version -- scripts/common.sh@355 -- # echo 1 00:08:04.157 03:52:46 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:04.157 03:52:46 version -- scripts/common.sh@366 -- # decimal 2 00:08:04.157 03:52:46 version -- scripts/common.sh@353 -- # local d=2 00:08:04.157 03:52:46 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:04.157 03:52:46 version -- scripts/common.sh@355 -- # echo 2 00:08:04.157 03:52:46 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:04.157 03:52:46 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:04.158 03:52:46 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:04.158 03:52:46 version -- scripts/common.sh@368 -- # return 0 00:08:04.158 03:52:46 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:04.158 03:52:46 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:04.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.158 --rc genhtml_branch_coverage=1 00:08:04.158 --rc genhtml_function_coverage=1 00:08:04.158 --rc genhtml_legend=1 00:08:04.158 --rc geninfo_all_blocks=1 00:08:04.158 --rc geninfo_unexecuted_blocks=1 00:08:04.158 00:08:04.158 ' 00:08:04.158 03:52:46 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:04.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.158 --rc genhtml_branch_coverage=1 00:08:04.158 --rc genhtml_function_coverage=1 00:08:04.158 --rc genhtml_legend=1 00:08:04.158 --rc geninfo_all_blocks=1 00:08:04.158 --rc geninfo_unexecuted_blocks=1 00:08:04.158 00:08:04.158 ' 00:08:04.158 03:52:46 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:04.158 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:04.158 --rc genhtml_branch_coverage=1 00:08:04.158 --rc genhtml_function_coverage=1 00:08:04.158 --rc genhtml_legend=1 00:08:04.158 --rc geninfo_all_blocks=1 00:08:04.158 --rc geninfo_unexecuted_blocks=1 00:08:04.158 00:08:04.158 ' 00:08:04.158 03:52:46 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:04.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.158 --rc genhtml_branch_coverage=1 00:08:04.158 --rc genhtml_function_coverage=1 00:08:04.158 --rc genhtml_legend=1 00:08:04.158 --rc geninfo_all_blocks=1 00:08:04.158 --rc geninfo_unexecuted_blocks=1 00:08:04.158 00:08:04.158 ' 00:08:04.158 03:52:46 version -- app/version.sh@17 -- # get_header_version major 00:08:04.158 03:52:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:04.158 03:52:46 version -- app/version.sh@14 -- # cut -f2 00:08:04.158 03:52:46 version -- app/version.sh@14 -- # tr -d '"' 00:08:04.418 03:52:46 version -- app/version.sh@17 -- # major=25 00:08:04.418 03:52:46 version -- app/version.sh@18 -- # get_header_version minor 00:08:04.418 03:52:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:04.418 03:52:46 version -- app/version.sh@14 -- # cut -f2 00:08:04.418 03:52:46 version -- app/version.sh@14 -- # tr -d '"' 00:08:04.418 03:52:46 version -- app/version.sh@18 -- # minor=1 00:08:04.418 03:52:46 version -- app/version.sh@19 -- # get_header_version patch 00:08:04.418 03:52:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:04.418 03:52:46 version -- app/version.sh@14 -- # cut -f2 00:08:04.418 03:52:46 version -- app/version.sh@14 -- # tr -d '"' 00:08:04.418 03:52:46 version -- app/version.sh@19 -- # patch=0 00:08:04.418 03:52:46 version -- app/version.sh@20 -- # get_header_version suffix 00:08:04.418 03:52:46 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:04.418 03:52:46 version -- app/version.sh@14 -- # cut -f2 00:08:04.418 03:52:46 version -- app/version.sh@14 -- # tr -d '"' 00:08:04.418 03:52:46 version -- app/version.sh@20 -- # suffix=-pre 00:08:04.418 03:52:46 version -- app/version.sh@22 -- # version=25.1 00:08:04.418 03:52:46 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:04.418 03:52:46 version -- app/version.sh@28 -- # version=25.1rc0 00:08:04.418 03:52:46 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:04.418 03:52:46 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:04.418 03:52:46 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:04.418 03:52:46 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:04.418 00:08:04.418 real 0m0.334s 00:08:04.418 user 0m0.193s 00:08:04.418 sys 0m0.199s 00:08:04.418 03:52:46 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.418 03:52:46 version -- common/autotest_common.sh@10 -- # set +x 00:08:04.418 ************************************ 00:08:04.418 END TEST version 00:08:04.418 ************************************ 00:08:04.418 03:52:47 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:04.418 03:52:47 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:08:04.418 03:52:47 -- spdk/autotest.sh@194 -- # uname -s 00:08:04.418 03:52:47 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:08:04.418 03:52:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:04.418 03:52:47 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:08:04.419 03:52:47 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:08:04.419 03:52:47 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:04.419 03:52:47 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:04.419 03:52:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.419 03:52:47 -- common/autotest_common.sh@10 -- # set +x 00:08:04.419 ************************************ 00:08:04.419 START TEST blockdev_nvme 00:08:04.419 ************************************ 00:08:04.419 03:52:47 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:04.679 * Looking for test storage... 00:08:04.679 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:04.679 03:52:47 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:04.679 03:52:47 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:08:04.679 03:52:47 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:04.679 03:52:47 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:04.679 03:52:47 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:04.679 03:52:47 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:04.679 03:52:47 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:04.679 03:52:47 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:04.679 03:52:47 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:04.679 03:52:47 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:04.679 03:52:47 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:04.679 03:52:47 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:04.679 03:52:47 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:04.679 03:52:47 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:04.679 03:52:47 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:04.679 03:52:47 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:04.679 03:52:47 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:08:04.679 03:52:47 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:04.679 03:52:47 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:04.679 03:52:47 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:08:04.679 03:52:47 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:08:04.679 03:52:47 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:04.679 03:52:47 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:08:04.679 03:52:47 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:04.679 03:52:47 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:08:04.679 03:52:47 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:08:04.679 03:52:47 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:04.679 03:52:47 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:08:04.679 03:52:47 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:04.679 03:52:47 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:04.679 03:52:47 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:04.679 03:52:47 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:08:04.679 03:52:47 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:04.679 03:52:47 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:04.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.679 --rc genhtml_branch_coverage=1 00:08:04.679 --rc genhtml_function_coverage=1 00:08:04.679 --rc genhtml_legend=1 00:08:04.679 --rc geninfo_all_blocks=1 00:08:04.679 --rc geninfo_unexecuted_blocks=1 00:08:04.679 00:08:04.679 ' 00:08:04.680 03:52:47 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:04.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.680 --rc genhtml_branch_coverage=1 00:08:04.680 --rc genhtml_function_coverage=1 00:08:04.680 --rc genhtml_legend=1 00:08:04.680 --rc geninfo_all_blocks=1 00:08:04.680 --rc geninfo_unexecuted_blocks=1 00:08:04.680 00:08:04.680 ' 00:08:04.680 03:52:47 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:04.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.680 --rc genhtml_branch_coverage=1 00:08:04.680 --rc genhtml_function_coverage=1 00:08:04.680 --rc genhtml_legend=1 00:08:04.680 --rc geninfo_all_blocks=1 00:08:04.680 --rc geninfo_unexecuted_blocks=1 00:08:04.680 00:08:04.680 ' 00:08:04.680 03:52:47 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:04.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.680 --rc genhtml_branch_coverage=1 00:08:04.680 --rc genhtml_function_coverage=1 00:08:04.680 --rc genhtml_legend=1 00:08:04.680 --rc geninfo_all_blocks=1 00:08:04.680 --rc geninfo_unexecuted_blocks=1 00:08:04.680 00:08:04.680 ' 00:08:04.680 03:52:47 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:04.680 03:52:47 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:08:04.680 03:52:47 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:04.680 03:52:47 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:04.680 03:52:47 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:04.680 03:52:47 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:04.680 03:52:47 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:08:04.680 03:52:47 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:04.680 03:52:47 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:08:04.680 03:52:47 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:08:04.680 03:52:47 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:08:04.680 03:52:47 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:08:04.680 03:52:47 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:08:04.680 03:52:47 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:08:04.680 03:52:47 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:08:04.680 03:52:47 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:08:04.680 03:52:47 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:08:04.680 03:52:47 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:08:04.680 03:52:47 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:08:04.680 03:52:47 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:08:04.680 03:52:47 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:08:04.680 03:52:47 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:08:04.680 03:52:47 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:08:04.680 03:52:47 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:08:04.680 03:52:47 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60883 00:08:04.680 03:52:47 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:04.680 03:52:47 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60883 00:08:04.680 03:52:47 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:04.680 03:52:47 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 60883 ']' 00:08:04.680 03:52:47 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.680 03:52:47 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.680 03:52:47 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.680 03:52:47 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.680 03:52:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:04.939 [2024-12-07 03:52:47.433827] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
00:08:04.939 [2024-12-07 03:52:47.434152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60883 ] 00:08:04.939 [2024-12-07 03:52:47.615212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.199 [2024-12-07 03:52:47.731196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.138 03:52:48 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:06.138 03:52:48 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:08:06.138 03:52:48 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:08:06.138 03:52:48 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:08:06.138 03:52:48 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:08:06.138 03:52:48 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:06.138 03:52:48 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:06.138 03:52:48 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:06.138 03:52:48 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.138 03:52:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:06.398 03:52:49 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.398 03:52:49 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:08:06.398 03:52:49 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.398 03:52:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:06.398 03:52:49 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.398 03:52:49 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:08:06.398 03:52:49 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:08:06.398 03:52:49 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.398 03:52:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:06.398 03:52:49 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.398 03:52:49 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:08:06.398 03:52:49 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.398 03:52:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:06.398 03:52:49 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.398 03:52:49 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:06.398 03:52:49 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.398 03:52:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:06.398 03:52:49 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.398 03:52:49 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:08:06.398 03:52:49 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:08:06.398 03:52:49 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.398 03:52:49 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:08:06.398 03:52:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:06.657 03:52:49 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.657 03:52:49 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:08:06.658 03:52:49 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "c7ed8591-161c-4a8f-b59a-bacb0b014dc1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c7ed8591-161c-4a8f-b59a-bacb0b014dc1",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "2173a599-b88b-4796-979a-cbe9da1e1d22"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "2173a599-b88b-4796-979a-cbe9da1e1d22",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' 
"ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "8be5960d-e8b2-42b1-bf16-416b94aeb96f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8be5960d-e8b2-42b1-bf16-416b94aeb96f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "fd042f59-d001-4435-8068-4c5ced8b65a8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fd042f59-d001-4435-8068-4c5ced8b65a8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "4d190a3a-44c9-4c2e-be4a-585ec3d1583e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4d190a3a-44c9-4c2e-be4a-585ec3d1583e",' ' "numa_id": -1,' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "cc3ee294-710d-443d-8289-630df123d239"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "cc3ee294-710d-443d-8289-630df123d239",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:06.658 03:52:49 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:08:06.658 03:52:49 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:08:06.658 03:52:49 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:08:06.658 03:52:49 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:08:06.658 03:52:49 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 60883 00:08:06.658 03:52:49 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 60883 ']' 00:08:06.658 03:52:49 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 60883 00:08:06.658 03:52:49 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:08:06.658 03:52:49 blockdev_nvme -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:06.658 03:52:49 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60883 00:08:06.658 killing process with pid 60883 00:08:06.658 03:52:49 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:06.658 03:52:49 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:06.658 03:52:49 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60883' 00:08:06.658 03:52:49 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 60883 00:08:06.658 03:52:49 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 60883 00:08:09.197 03:52:51 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:09.197 03:52:51 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:09.197 03:52:51 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:08:09.197 03:52:51 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.197 03:52:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:09.197 ************************************ 00:08:09.197 START TEST bdev_hello_world 00:08:09.197 ************************************ 00:08:09.197 03:52:51 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:09.197 [2024-12-07 03:52:51.776777] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:08:09.197 [2024-12-07 03:52:51.777063] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60978 ] 00:08:09.456 [2024-12-07 03:52:51.958492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.456 [2024-12-07 03:52:52.076354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.041 [2024-12-07 03:52:52.766849] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:10.041 [2024-12-07 03:52:52.767117] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:10.041 [2024-12-07 03:52:52.767157] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:10.041 [2024-12-07 03:52:52.770204] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:10.041 [2024-12-07 03:52:52.770840] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:10.041 [2024-12-07 03:52:52.770877] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:10.041 [2024-12-07 03:52:52.771127] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
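The hello_bdev example driving the notices above opens Nvme0n1, writes the string "Hello World!" through an I/O channel, reads it back, and stops the app. Its standalone invocation, as traced for this run (the --json config attaches the same four PCIe controllers that setup_nvme_conf loaded earlier):

  # Write and read back "Hello World!" on the named bdev.
  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -b Nvme0n1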
00:08:10.041 00:08:10.041 [2024-12-07 03:52:52.771152] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:11.418 00:08:11.418 real 0m2.219s 00:08:11.418 user 0m1.846s 00:08:11.418 sys 0m0.263s 00:08:11.418 ************************************ 00:08:11.418 END TEST bdev_hello_world 00:08:11.418 ************************************ 00:08:11.418 03:52:53 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.418 03:52:53 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:11.418 03:52:53 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:08:11.418 03:52:53 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:11.418 03:52:53 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.418 03:52:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:11.418 ************************************ 00:08:11.418 START TEST bdev_bounds 00:08:11.418 ************************************ 00:08:11.418 03:52:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:08:11.418 03:52:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61020 00:08:11.418 03:52:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:11.418 03:52:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61020' 00:08:11.418 03:52:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:11.418 Process bdevio pid: 61020 00:08:11.418 03:52:53 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61020 00:08:11.418 03:52:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61020 ']' 00:08:11.418 03:52:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.418 03:52:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:11.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.418 03:52:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.418 03:52:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:11.418 03:52:53 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:11.418 [2024-12-07 03:52:54.079950] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
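The bdev_bounds test starting here runs bdevio in wait mode and then triggers its CUnit suites over RPC; the per-bdev results follow below. Condensed from the traced commands (-s 0 is the harness's PRE_RESERVED_MEM value, i.e. no pre-reserved memory):

  # Start bdevio waiting (-w) for an RPC trigger against the same config.
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  # Run all registered suites against the attached bdevs.
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests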
00:08:11.418 [2024-12-07 03:52:54.080074] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61020 ] 00:08:11.675 [2024-12-07 03:52:54.261997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:11.675 [2024-12-07 03:52:54.377763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.675 [2024-12-07 03:52:54.377881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.675 [2024-12-07 03:52:54.377910] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:12.618 03:52:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.618 03:52:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:08:12.618 03:52:55 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:12.618 I/O targets: 00:08:12.618 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:12.618 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:12.618 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:12.618 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:12.618 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:12.618 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:12.618 00:08:12.618 00:08:12.618 CUnit - A unit testing framework for C - Version 2.1-3 00:08:12.618 http://cunit.sourceforge.net/ 00:08:12.618 00:08:12.618 00:08:12.618 Suite: bdevio tests on: Nvme3n1 00:08:12.618 Test: blockdev write read block ...passed 00:08:12.618 Test: blockdev write zeroes read block ...passed 00:08:12.618 Test: blockdev write zeroes read no split ...passed 00:08:12.618 Test: blockdev write zeroes read split ...passed 00:08:12.618 Test: blockdev write zeroes read split partial ...passed 00:08:12.618 Test: blockdev reset ...[2024-12-07 03:52:55.254361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:08:12.618 [2024-12-07 03:52:55.258440] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 00:08:12.618 passed 00:08:12.618 Test: blockdev write read 8 blocks ...passed 00:08:12.618 Test: blockdev write read size > 128k ...passed 00:08:12.618 Test: blockdev write read invalid size ...passed 00:08:12.618 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:12.618 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:12.618 Test: blockdev write read max offset ...passed 00:08:12.618 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:12.618 Test: blockdev writev readv 8 blocks ...passed 00:08:12.618 Test: blockdev writev readv 30 x 1block ...passed 00:08:12.618 Test: blockdev writev readv block ...passed 00:08:12.618 Test: blockdev writev readv size > 128k ...passed 00:08:12.618 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:12.618 Test: blockdev comparev and writev ...[2024-12-07 03:52:55.269977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bbc0a000 len:0x1000 00:08:12.618 [2024-12-07 03:52:55.270167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:12.618 passed 00:08:12.618 Test: blockdev nvme passthru rw ...passed 00:08:12.618 Test: blockdev nvme passthru vendor specific ...[2024-12-07 03:52:55.271535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:12.618 [2024-12-07 03:52:55.271739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:12.618 passed 00:08:12.618 Test: blockdev nvme admin passthru ...passed 00:08:12.618 Test: blockdev copy ...passed 00:08:12.618 Suite: bdevio tests on: Nvme2n3 00:08:12.618 Test: blockdev write read block ...passed 00:08:12.618 Test: blockdev write zeroes read block ...passed 00:08:12.618 Test: blockdev write zeroes read no split ...passed 00:08:12.618 Test: blockdev write zeroes read split ...passed 00:08:12.877 Test: blockdev write zeroes read split partial ...passed 00:08:12.877 Test: blockdev reset ...[2024-12-07 03:52:55.349987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:12.877 [2024-12-07 03:52:55.354188] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:08:12.877 passed 00:08:12.877 Test: blockdev write read 8 blocks ...passed
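The *NOTICE* lines inside the comparev and passthru tests above are expected: bdevio deliberately issues an NVMe COMPARE against mismatching data and an unsupported admin opcode, so the COMPARE FAILURE (02/85) and INVALID OPCODE (00/01) completions are the passing outcome, as the "passed" verdicts on those lines confirm. The reset test likewise disconnects and reconnects the controller through the bdev_nvme layer; a comparable manual reset over RPC might be (assuming the bdev_nvme_reset_controller method is available in this build):

  # Reset the controller created as Nvme2 earlier; on success the target
  # logs the same resetting/reset-complete notices seen in this suite.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_reset_controller Nvme2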
00:08:12.878 Test: blockdev write read size > 128k ...passed 00:08:12.878 Test: blockdev write read invalid size ...passed 00:08:12.878 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:12.878 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:12.878 Test: blockdev write read max offset ...passed 00:08:12.878 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:12.878 Test: blockdev writev readv 8 blocks ...passed 00:08:12.878 Test: blockdev writev readv 30 x 1block ...passed 00:08:12.878 Test: blockdev writev readv block ...passed 00:08:12.878 Test: blockdev writev readv size > 128k ...passed 00:08:12.878 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:12.878 Test: blockdev comparev and writev ...[2024-12-07 03:52:55.364262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29ee06000 len:0x1000 00:08:12.878 [2024-12-07 03:52:55.364315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:12.878 passed 00:08:12.878 Test: blockdev nvme passthru rw ...passed 00:08:12.878 Test: blockdev nvme passthru vendor specific ...[2024-12-07 03:52:55.365172] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:12.878 [2024-12-07 03:52:55.365333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:12.878 passed 00:08:12.878 Test: blockdev nvme admin passthru ...passed 00:08:12.878 Test: blockdev copy ...passed 00:08:12.877 Suite: bdevio tests on: Nvme2n2 00:08:12.877 Test: blockdev write read block ...passed 00:08:12.877 Test: blockdev write zeroes read block ...passed 00:08:12.877 Test: blockdev write zeroes read no split ...passed 00:08:12.877 Test: blockdev write zeroes read split ...passed 00:08:12.877 Test: blockdev write zeroes read split partial ...passed 00:08:12.877 Test: blockdev reset ...[2024-12-07 03:52:55.450155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:12.877 [2024-12-07 03:52:55.454435] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:08:12.877 passed 00:08:12.877 Test: blockdev write read 8 blocks ...passed 00:08:12.878 Test: blockdev write read size > 128k ...passed 00:08:12.878 Test: blockdev write read invalid size ...passed 00:08:12.878 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:12.878 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:12.878 Test: blockdev write read max offset ...passed 00:08:12.878 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:12.878 Test: blockdev writev readv 8 blocks ...passed 00:08:12.878 Test: blockdev writev readv 30 x 1block ...passed 00:08:12.878 Test: blockdev writev readv block ...passed 00:08:12.878 Test: blockdev writev readv size > 128k ...passed 00:08:12.878 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:12.878 Test: blockdev comparev and writev ...[2024-12-07 03:52:55.465447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cbc3c000 len:0x1000 00:08:12.878 [2024-12-07 03:52:55.465647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:12.878 passed 00:08:12.878 Test: blockdev nvme passthru rw ...passed 00:08:12.878 Test: blockdev nvme passthru vendor specific ...[2024-12-07 03:52:55.466980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:12.878 [2024-12-07 03:52:55.467144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:12.878 passed 00:08:12.878 Test: blockdev nvme admin passthru ...passed 00:08:12.878 Test: blockdev copy ...passed 00:08:12.878 Suite: bdevio tests on: Nvme2n1 00:08:12.878 Test: blockdev write read block ...passed 00:08:12.878 Test: blockdev write zeroes read block ...passed 00:08:12.878 Test: blockdev write zeroes read no split ...passed 00:08:12.878 Test: blockdev write zeroes read split ...passed 00:08:12.878 Test: blockdev write zeroes read split partial ...passed 00:08:12.878 Test: blockdev reset ...[2024-12-07 03:52:55.548269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:12.878 [2024-12-07 03:52:55.552853] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:08:12.878 passed 00:08:12.878 Test: blockdev write read 8 blocks ...passed
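Unlike the namespaces above, Nvme0n1 (exercised last, below) carries separate metadata, which is why bdevio will skip comparev_and_writev on it; the bdev dump earlier reported this as md_size 64 with md_interleave false. A quick way to check a single bdev (the -b flag and the jq filter are illustrative):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
      | jq '.[0] | {md_size, md_interleave, dif_type}'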
00:08:12.878 Test: blockdev write read size > 128k ...passed 00:08:12.878 Test: blockdev write read invalid size ...passed 00:08:12.878 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:12.878 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:12.878 Test: blockdev write read max offset ...passed 00:08:12.878 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:12.878 Test: blockdev writev readv 8 blocks ...passed 00:08:12.878 Test: blockdev writev readv 30 x 1block ...passed 00:08:12.878 Test: blockdev writev readv block ...passed 00:08:12.878 Test: blockdev writev readv size > 128k ...passed 00:08:12.878 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:12.878 Test: blockdev comparev and writev ...[2024-12-07 03:52:55.562336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cbc38000 len:0x1000 00:08:12.878 [2024-12-07 03:52:55.562394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:12.878 passed 00:08:12.878 Test: blockdev nvme passthru rw ...passed 00:08:12.878 Test: blockdev nvme passthru vendor specific ...[2024-12-07 03:52:55.563300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:12.878 [2024-12-07 03:52:55.563442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:12.878 passed 00:08:12.878 Test: blockdev nvme admin passthru ...passed 00:08:12.878 Test: blockdev copy ...passed 00:08:12.878 Suite: bdevio tests on: Nvme1n1 00:08:12.878 Test: blockdev write read block ...passed 00:08:12.878 Test: blockdev write zeroes read block ...passed 00:08:12.878 Test: blockdev write zeroes read no split ...passed 00:08:13.136 Test: blockdev write zeroes read split ...passed 00:08:13.136 Test: blockdev write zeroes read split partial ...passed 00:08:13.136 Test: blockdev reset ...[2024-12-07 03:52:55.645982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:13.136 [2024-12-07 03:52:55.649885] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 00:08:13.136 passed 00:08:13.136 Test: blockdev write read 8 blocks ...passed 00:08:13.136 Test: blockdev write read size > 128k ...passed 00:08:13.136 Test: blockdev write read invalid size ...passed 00:08:13.136 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:13.136 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:13.136 Test: blockdev write read max offset ...passed 00:08:13.136 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:13.136 Test: blockdev writev readv 8 blocks ...passed 00:08:13.136 Test: blockdev writev readv 30 x 1block ...passed 00:08:13.136 Test: blockdev writev readv block ...passed 00:08:13.136 Test: blockdev writev readv size > 128k ...passed 00:08:13.136 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:13.136 Test: blockdev comparev and writev ...[2024-12-07 03:52:55.660523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cbc34000 len:0x1000 00:08:13.136 [2024-12-07 03:52:55.660724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:13.136 passed 00:08:13.136 Test: blockdev nvme passthru rw ...passed 00:08:13.136 Test: blockdev nvme passthru vendor specific ...[2024-12-07 03:52:55.661869] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:13.136 [2024-12-07 03:52:55.662068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:13.136 passed 00:08:13.136 Test: blockdev nvme admin passthru ...passed 00:08:13.136 Test: blockdev copy ...passed 00:08:13.136 Suite: bdevio tests on: Nvme0n1 00:08:13.136 Test: blockdev write read block ...passed 00:08:13.136 Test: blockdev write zeroes read block ...passed 00:08:13.136 Test: blockdev write zeroes read no split ...passed 00:08:13.136 Test: blockdev write zeroes read split ...passed 00:08:13.136 Test: blockdev write zeroes read split partial ...passed 00:08:13.136 Test: blockdev reset ...[2024-12-07 03:52:55.737977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:13.136 [2024-12-07 03:52:55.741751] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:13.136 passed 00:08:13.136 Test: blockdev write read 8 blocks ...passed 00:08:13.136 Test: blockdev write read size > 128k ...passed 00:08:13.136 Test: blockdev write read invalid size ...passed 00:08:13.136 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:13.136 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:13.136 Test: blockdev write read max offset ...passed 00:08:13.136 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:13.136 Test: blockdev writev readv 8 blocks ...passed 00:08:13.136 Test: blockdev writev readv 30 x 1block ...passed 00:08:13.136 Test: blockdev writev readv block ...passed 00:08:13.136 Test: blockdev writev readv size > 128k ...passed 00:08:13.136 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:13.136 Test: blockdev comparev and writev ...[2024-12-07 03:52:55.751401] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:13.136 separate metadata which is not supported yet. passed 00:08:13.136 Test: blockdev nvme passthru rw ...
00:08:13.136 passed 00:08:13.136 Test: blockdev nvme passthru vendor specific ...passed 00:08:13.136 Test: blockdev nvme admin passthru ...[2024-12-07 03:52:55.752068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:13.136 [2024-12-07 03:52:55.752124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:13.136 passed 00:08:13.136 Test: blockdev copy ...passed 00:08:13.136 00:08:13.136 Run Summary: Type Total Ran Passed Failed Inactive 00:08:13.136 suites 6 6 n/a 0 0 00:08:13.136 tests 138 138 138 0 0 00:08:13.136 asserts 893 893 893 0 n/a 00:08:13.136 00:08:13.136 Elapsed time = 1.556 seconds 00:08:13.136 0 00:08:13.136 03:52:55 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61020 00:08:13.136 03:52:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61020 ']' 00:08:13.136 03:52:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61020 00:08:13.136 03:52:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:08:13.136 03:52:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:13.136 03:52:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61020 00:08:13.136 03:52:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:13.136 03:52:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:13.136 03:52:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61020' 00:08:13.136 killing process with pid 61020 00:08:13.136 03:52:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61020 00:08:13.136 03:52:55 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61020 00:08:14.510 03:52:56 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:14.510 00:08:14.510 real 0m2.893s 00:08:14.510 user 0m7.365s 00:08:14.510 sys 0m0.444s 00:08:14.510 03:52:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:14.510 ************************************ 00:08:14.510 END TEST bdev_bounds 00:08:14.510 ************************************ 00:08:14.510 03:52:56 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:14.510 03:52:56 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:14.510 03:52:56 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:14.510 03:52:56 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:14.510 03:52:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.510 ************************************ 00:08:14.510 START TEST bdev_nbd 00:08:14.510 ************************************ 00:08:14.510 03:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:14.510 03:52:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:14.510 03:52:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:08:14.510 03:52:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:14.510 03:52:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:14.510 03:52:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:14.510 03:52:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:14.510 03:52:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:08:14.510 03:52:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:14.510 03:52:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:14.510 03:52:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:14.510 03:52:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:08:14.510 03:52:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:14.510 03:52:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:14.510 03:52:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:14.510 03:52:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:14.510 03:52:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61085 00:08:14.510 03:52:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:14.510 03:52:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:14.510 03:52:56 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61085 /var/tmp/spdk-nbd.sock 00:08:14.510 03:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61085 ']' 00:08:14.510 03:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:14.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:14.510 03:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:14.510 03:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:14.510 03:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:14.510 03:52:56 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:14.510 [2024-12-07 03:52:57.070569] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
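The bdev_nbd test starting here brings up bdev_svc with the same bdev configuration and drives the kernel nbd module over /var/tmp/spdk-nbd.sock: each bdev is exported as a /dev/nbdN device, checked for visibility, verified with a direct 4 KiB read, and detached again. The per-device sequence, condensed from the commands traced below:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  # Export the bdev, confirm the kernel sees it, read one block, detach.
  $rpc -s $sock nbd_start_disk Nvme0n1 /dev/nbd0
  grep -q -w nbd0 /proc/partitions
  dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
  $rpc -s $sock nbd_stop_disk /dev/nbd0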
00:08:14.510 [2024-12-07 03:52:57.070690] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.768 [2024-12-07 03:52:57.254668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.768 [2024-12-07 03:52:57.365247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.334 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.334 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:08:15.334 03:52:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:15.334 03:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:15.335 03:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:15.335 03:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:15.335 03:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:15.335 03:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:15.335 03:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:15.335 03:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:15.335 03:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:15.335 03:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:15.335 03:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:15.335 03:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:15.335 03:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:15.593 03:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:15.593 03:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:15.593 03:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:15.593 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:15.593 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:15.593 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:15.593 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:15.593 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:15.593 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:15.593 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:15.593 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:15.593 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:15.593 1+0 records in 
00:08:15.593 1+0 records out 00:08:15.593 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000575808 s, 7.1 MB/s 00:08:15.593 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:15.593 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:15.593 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:15.593 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:15.593 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:15.593 03:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:15.593 03:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:15.852 03:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:15.852 03:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:15.852 03:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:15.852 03:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:15.852 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:15.852 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:15.852 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:15.852 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:15.852 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:15.852 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:15.852 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:15.852 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:15.852 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:15.852 1+0 records in 00:08:15.852 1+0 records out 00:08:15.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000704391 s, 5.8 MB/s 00:08:15.852 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:15.852 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:15.852 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:15.852 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:15.852 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:15.852 03:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:15.852 03:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:15.852 03:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:16.111 03:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:16.111 03:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:16.111 03:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:08:16.111 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:08:16.111 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:16.111 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:16.111 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:16.111 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:08:16.111 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:16.111 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:16.111 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:16.111 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:16.111 1+0 records in 00:08:16.111 1+0 records out 00:08:16.111 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00185581 s, 2.2 MB/s 00:08:16.111 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:16.111 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:16.111 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:16.111 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:16.111 03:52:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:16.111 03:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:16.111 03:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:16.111 03:52:58 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:16.370 03:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:16.370 03:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:16.370 03:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:16.370 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:08:16.370 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:16.370 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:16.370 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:16.370 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:08:16.370 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:16.370 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:16.370 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:16.370 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:16.370 1+0 records in 00:08:16.370 1+0 records out 00:08:16.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000746647 s, 5.5 MB/s 00:08:16.370 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:16.370 03:52:59 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:16.370 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:16.370 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:16.370 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:16.370 03:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:16.370 03:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:16.370 03:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:16.629 03:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:16.629 03:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:16.629 03:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:16.629 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:08:16.629 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:16.629 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:16.629 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:16.629 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:08:16.629 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:16.629 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:16.629 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:16.629 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:16.629 1+0 records in 00:08:16.629 1+0 records out 00:08:16.629 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00081525 s, 5.0 MB/s 00:08:16.629 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:16.629 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:16.629 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:16.629 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:16.629 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:16.629 03:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:16.629 03:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:16.629 03:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:16.889 03:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:16.889 03:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:16.889 03:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:16.889 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:08:16.889 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:16.889 03:52:59 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:16.889 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:16.889 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:08:16.889 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:16.889 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:16.889 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:16.889 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:16.889 1+0 records in 00:08:16.889 1+0 records out 00:08:16.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000674744 s, 6.1 MB/s 00:08:16.889 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:16.889 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:16.889 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:16.889 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:16.889 03:52:59 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:16.889 03:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:16.889 03:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:16.889 03:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:17.148 03:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:17.148 { 00:08:17.148 "nbd_device": "/dev/nbd0", 00:08:17.148 "bdev_name": "Nvme0n1" 00:08:17.148 }, 00:08:17.148 { 00:08:17.148 "nbd_device": "/dev/nbd1", 00:08:17.149 "bdev_name": "Nvme1n1" 00:08:17.149 }, 00:08:17.149 { 00:08:17.149 "nbd_device": "/dev/nbd2", 00:08:17.149 "bdev_name": "Nvme2n1" 00:08:17.149 }, 00:08:17.149 { 00:08:17.149 "nbd_device": "/dev/nbd3", 00:08:17.149 "bdev_name": "Nvme2n2" 00:08:17.149 }, 00:08:17.149 { 00:08:17.149 "nbd_device": "/dev/nbd4", 00:08:17.149 "bdev_name": "Nvme2n3" 00:08:17.149 }, 00:08:17.149 { 00:08:17.149 "nbd_device": "/dev/nbd5", 00:08:17.149 "bdev_name": "Nvme3n1" 00:08:17.149 } 00:08:17.149 ]' 00:08:17.149 03:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:17.149 03:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:17.149 03:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:17.149 { 00:08:17.149 "nbd_device": "/dev/nbd0", 00:08:17.149 "bdev_name": "Nvme0n1" 00:08:17.149 }, 00:08:17.149 { 00:08:17.149 "nbd_device": "/dev/nbd1", 00:08:17.149 "bdev_name": "Nvme1n1" 00:08:17.149 }, 00:08:17.149 { 00:08:17.149 "nbd_device": "/dev/nbd2", 00:08:17.149 "bdev_name": "Nvme2n1" 00:08:17.149 }, 00:08:17.149 { 00:08:17.149 "nbd_device": "/dev/nbd3", 00:08:17.149 "bdev_name": "Nvme2n2" 00:08:17.149 }, 00:08:17.149 { 00:08:17.149 "nbd_device": "/dev/nbd4", 00:08:17.149 "bdev_name": "Nvme2n3" 00:08:17.149 }, 00:08:17.149 { 00:08:17.149 "nbd_device": "/dev/nbd5", 00:08:17.149 "bdev_name": "Nvme3n1" 00:08:17.149 } 00:08:17.149 ]' 00:08:17.149 03:52:59 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:17.149 03:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:17.149 03:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:17.149 03:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:17.149 03:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:17.149 03:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:17.149 03:52:59 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:17.409 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:17.409 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:17.409 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:17.409 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:17.409 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:17.409 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:17.409 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:17.409 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:17.409 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:17.409 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:17.668 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:17.668 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:17.668 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:17.668 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:17.668 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:17.668 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:17.668 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:17.668 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:17.668 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:17.668 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:17.928 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:17.928 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:17.928 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:17.928 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:17.928 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:17.928 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:17.928 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:17.928 03:53:00 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:17.928 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:17.928 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:18.187 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:18.187 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:18.187 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:18.187 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:18.187 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:18.187 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:18.187 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:18.187 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:18.187 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:18.187 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:18.187 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:18.187 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:18.187 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:18.187 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:18.187 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:18.187 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:18.187 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:18.187 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:18.187 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:18.187 03:53:00 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:18.447 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:18.447 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:18.447 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:18.447 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:18.447 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:18.447 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:18.447 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:18.447 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:18.447 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:18.447 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:18.447 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:18.706 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:18.706 03:53:01 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:18.706 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:18.706 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:18.706 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:18.706 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:18.706 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:18.706 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:18.706 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:18.706 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:18.706 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:18.706 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:18.706 03:53:01 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:18.706 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:18.706 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:18.706 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:18.706 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:18.706 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:18.706 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:18.706 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:18.706 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:18.706 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:18.706 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:18.706 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:18.706 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:18.706 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:18.706 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:18.706 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:18.966 /dev/nbd0 00:08:18.966 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:18.966 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:18.966 03:53:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:18.966 03:53:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:18.966 03:53:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:18.966 
03:53:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:18.966 03:53:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:18.966 03:53:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:18.966 03:53:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:18.966 03:53:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:18.966 03:53:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:18.966 1+0 records in 00:08:18.966 1+0 records out 00:08:18.966 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000543959 s, 7.5 MB/s 00:08:18.966 03:53:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:18.966 03:53:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:18.966 03:53:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:18.966 03:53:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:18.966 03:53:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:18.966 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:18.966 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:18.966 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:19.226 /dev/nbd1 00:08:19.226 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:19.226 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:19.226 03:53:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:19.226 03:53:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:19.226 03:53:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:19.226 03:53:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:19.226 03:53:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:19.226 03:53:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:19.226 03:53:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:19.227 03:53:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:19.227 03:53:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:19.227 1+0 records in 00:08:19.227 1+0 records out 00:08:19.227 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000654824 s, 6.3 MB/s 00:08:19.227 03:53:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:19.227 03:53:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:19.227 03:53:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:19.227 03:53:01 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:19.227 03:53:01 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:08:19.227 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:19.227 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:19.227 03:53:01 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:08:19.486 /dev/nbd10 00:08:19.486 03:53:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:19.486 03:53:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:19.486 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:08:19.486 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:19.486 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:19.486 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:19.486 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:08:19.486 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:19.486 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:19.486 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:19.486 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:19.486 1+0 records in 00:08:19.486 1+0 records out 00:08:19.486 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000675149 s, 6.1 MB/s 00:08:19.486 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:19.486 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:19.486 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:19.486 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:19.486 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:19.486 03:53:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:19.486 03:53:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:19.486 03:53:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:08:19.745 /dev/nbd11 00:08:19.745 03:53:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:19.745 03:53:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:19.745 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:08:19.745 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:19.745 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:19.745 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:19.745 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:08:19.745 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:19.745 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:19.745 03:53:02 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:19.745 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:19.745 1+0 records in 00:08:19.745 1+0 records out 00:08:19.745 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000778795 s, 5.3 MB/s 00:08:19.745 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:19.745 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:19.745 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:19.745 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:19.745 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:19.745 03:53:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:19.745 03:53:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:19.745 03:53:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:08:20.005 /dev/nbd12 00:08:20.005 03:53:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:20.005 03:53:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:20.005 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:08:20.005 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:20.005 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:20.005 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:20.005 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:08:20.005 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:20.005 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:20.005 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:20.005 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:20.005 1+0 records in 00:08:20.005 1+0 records out 00:08:20.005 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000983977 s, 4.2 MB/s 00:08:20.005 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:20.005 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:20.005 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:20.005 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:20.005 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:20.005 03:53:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:20.005 03:53:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:20.005 03:53:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:08:20.265 /dev/nbd13 
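For readers following the trace: each nbd_start_disk call above is gated by the waitfornbd helper in common/autotest_common.sh, which polls /proc/partitions until the kernel registers the device, then proves the device answers a direct 4 KiB read. A minimal bash sketch of that pattern, reconstructed from the trace (the two 20-attempt loops, the grep, the dd, and the stat size check are all visible above; the retry delay and the temp-file path are assumptions):

waitfornbd() {
    local nbd_name=$1 i tmp=/tmp/nbdtest         # tmp path is an assumption
    # First loop: wait for the kernel to publish the device node.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1                                # delay not visible in the trace
    done
    # Second loop: retry a direct-I/O read of one 4 KiB block until it lands.
    for ((i = 1; i <= 20; i++)); do
        if dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct; then
            local size
            size=$(stat -c %s "$tmp")
            rm -f "$tmp"
            (( size != 0 )) && return 0          # non-empty read => device is live
        fi
        sleep 0.1
    done
    return 1
}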
00:08:20.265 03:53:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:20.265 03:53:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:20.265 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:08:20.265 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:20.265 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:20.265 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:20.265 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:08:20.265 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:20.265 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:20.265 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:20.265 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:20.265 1+0 records in 00:08:20.265 1+0 records out 00:08:20.265 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106228 s, 3.9 MB/s 00:08:20.265 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:20.265 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:20.265 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:20.265 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:20.265 03:53:02 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:20.265 03:53:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:20.265 03:53:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:20.265 03:53:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:20.265 03:53:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:20.265 03:53:02 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:20.525 03:53:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:20.525 { 00:08:20.525 "nbd_device": "/dev/nbd0", 00:08:20.525 "bdev_name": "Nvme0n1" 00:08:20.525 }, 00:08:20.525 { 00:08:20.525 "nbd_device": "/dev/nbd1", 00:08:20.525 "bdev_name": "Nvme1n1" 00:08:20.525 }, 00:08:20.525 { 00:08:20.525 "nbd_device": "/dev/nbd10", 00:08:20.525 "bdev_name": "Nvme2n1" 00:08:20.525 }, 00:08:20.525 { 00:08:20.525 "nbd_device": "/dev/nbd11", 00:08:20.525 "bdev_name": "Nvme2n2" 00:08:20.525 }, 00:08:20.525 { 00:08:20.525 "nbd_device": "/dev/nbd12", 00:08:20.525 "bdev_name": "Nvme2n3" 00:08:20.525 }, 00:08:20.525 { 00:08:20.525 "nbd_device": "/dev/nbd13", 00:08:20.525 "bdev_name": "Nvme3n1" 00:08:20.525 } 00:08:20.525 ]' 00:08:20.525 03:53:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:20.525 { 00:08:20.525 "nbd_device": "/dev/nbd0", 00:08:20.525 "bdev_name": "Nvme0n1" 00:08:20.525 }, 00:08:20.525 { 00:08:20.525 "nbd_device": "/dev/nbd1", 00:08:20.525 "bdev_name": "Nvme1n1" 00:08:20.525 }, 00:08:20.525 { 00:08:20.525 "nbd_device": "/dev/nbd10", 00:08:20.525 "bdev_name": "Nvme2n1" 
00:08:20.525 }, 00:08:20.525 { 00:08:20.525 "nbd_device": "/dev/nbd11", 00:08:20.525 "bdev_name": "Nvme2n2" 00:08:20.525 }, 00:08:20.525 { 00:08:20.525 "nbd_device": "/dev/nbd12", 00:08:20.525 "bdev_name": "Nvme2n3" 00:08:20.525 }, 00:08:20.525 { 00:08:20.525 "nbd_device": "/dev/nbd13", 00:08:20.525 "bdev_name": "Nvme3n1" 00:08:20.525 } 00:08:20.525 ]' 00:08:20.525 03:53:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:20.525 03:53:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:20.525 /dev/nbd1 00:08:20.525 /dev/nbd10 00:08:20.525 /dev/nbd11 00:08:20.525 /dev/nbd12 00:08:20.525 /dev/nbd13' 00:08:20.525 03:53:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:20.525 /dev/nbd1 00:08:20.525 /dev/nbd10 00:08:20.525 /dev/nbd11 00:08:20.525 /dev/nbd12 00:08:20.525 /dev/nbd13' 00:08:20.525 03:53:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:20.525 03:53:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:08:20.525 03:53:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:08:20.525 03:53:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:08:20.525 03:53:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:08:20.525 03:53:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:08:20.525 03:53:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:20.525 03:53:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:20.525 03:53:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:20.525 03:53:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:20.525 03:53:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:20.525 03:53:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:20.785 256+0 records in 00:08:20.785 256+0 records out 00:08:20.785 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00574674 s, 182 MB/s 00:08:20.785 03:53:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:20.785 03:53:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:20.785 256+0 records in 00:08:20.785 256+0 records out 00:08:20.785 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125509 s, 8.4 MB/s 00:08:20.785 03:53:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:20.785 03:53:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:21.044 256+0 records in 00:08:21.044 256+0 records out 00:08:21.044 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137423 s, 7.6 MB/s 00:08:21.044 03:53:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:21.044 03:53:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:21.044 256+0 records in 00:08:21.044 256+0 records out 00:08:21.044 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.13379 s, 7.8 MB/s 00:08:21.044 03:53:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:21.044 03:53:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:21.304 256+0 records in 00:08:21.304 256+0 records out 00:08:21.304 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126762 s, 8.3 MB/s 00:08:21.304 03:53:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:21.304 03:53:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:21.304 256+0 records in 00:08:21.304 256+0 records out 00:08:21.304 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132388 s, 7.9 MB/s 00:08:21.304 03:53:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:21.304 03:53:03 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:21.563 256+0 records in 00:08:21.563 256+0 records out 00:08:21.564 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.134673 s, 7.8 MB/s 00:08:21.564 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:21.564 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:21.564 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:21.564 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:21.564 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:21.564 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:21.564 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:21.564 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:21.564 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:21.564 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:21.564 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:21.564 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:21.564 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:21.564 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:21.564 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:21.564 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:21.564 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:21.564 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:21.564 03:53:04 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:21.564 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:21.564 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:21.564 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:21.564 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:21.564 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:21.564 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:21.564 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:21.564 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:21.823 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:21.823 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:21.823 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:21.823 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:21.823 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:21.823 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:21.823 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:21.823 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:21.823 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:21.823 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:22.083 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:22.083 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:22.083 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:22.083 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:22.083 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:22.083 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:22.083 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:22.083 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:22.083 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:22.083 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:22.083 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:22.083 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:22.083 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:22.083 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:22.083 03:53:04 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:22.083 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:22.083 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:22.083 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:22.083 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:22.083 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:22.342 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:22.342 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:22.342 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:22.342 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:22.342 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:22.342 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:22.342 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:22.342 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:22.342 03:53:04 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:22.342 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:22.602 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:22.602 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:22.602 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:22.602 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:22.602 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:22.602 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:22.602 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:22.602 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:22.602 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:22.602 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:22.860 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:22.860 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:22.860 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:22.860 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:22.860 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:22.860 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:22.860 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:22.860 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:22.860 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:22.860 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:08:22.860 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:23.119 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:23.119 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:23.119 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:23.119 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:23.119 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:23.119 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:23.119 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:23.119 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:23.119 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:23.119 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:23.119 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:23.119 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:23.119 03:53:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:23.119 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:23.119 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:23.119 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:23.378 malloc_lvol_verify 00:08:23.378 03:53:05 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:23.378 8cd134d5-da6d-4efe-b1ee-27ec03f8bcf4 00:08:23.637 03:53:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:23.637 9548ef54-f380-45dc-b72e-895fe156ec21 00:08:23.637 03:53:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:23.896 /dev/nbd0 00:08:23.896 03:53:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:23.896 03:53:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:23.896 03:53:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:23.896 03:53:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:23.896 03:53:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:23.896 mke2fs 1.47.0 (5-Feb-2023) 00:08:23.896 Discarding device blocks: 0/4096 done 00:08:23.896 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:23.896 00:08:23.896 Allocating group tables: 0/1 done 00:08:23.896 Writing inode tables: 0/1 done 00:08:23.896 Creating journal (1024 blocks): done 00:08:23.896 Writing superblocks and filesystem accounting information: 0/1 done 00:08:23.896 00:08:23.896 03:53:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:23.896 03:53:06 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:23.896 03:53:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:23.896 03:53:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:23.896 03:53:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:23.896 03:53:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:23.896 03:53:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:24.156 03:53:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:24.156 03:53:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:24.156 03:53:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:24.156 03:53:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:24.156 03:53:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:24.156 03:53:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:24.156 03:53:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:24.156 03:53:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:24.156 03:53:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61085 00:08:24.156 03:53:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61085 ']' 00:08:24.156 03:53:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61085 00:08:24.156 03:53:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:08:24.156 03:53:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:24.156 03:53:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61085 00:08:24.156 03:53:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:24.156 03:53:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:24.156 killing process with pid 61085 00:08:24.156 03:53:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61085' 00:08:24.156 03:53:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61085 00:08:24.156 03:53:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61085 00:08:25.599 03:53:08 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:25.599 00:08:25.599 real 0m11.169s 00:08:25.599 user 0m14.389s 00:08:25.599 sys 0m4.519s 00:08:25.599 03:53:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.599 ************************************ 00:08:25.599 03:53:08 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:25.599 END TEST bdev_nbd 00:08:25.599 ************************************ 00:08:25.599 03:53:08 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:08:25.599 03:53:08 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:08:25.599 skipping fio tests on NVMe due to multi-ns failures. 00:08:25.599 03:53:08 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
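The nbd_with_lvol_verify pass that closes the suite condenses to the RPC sequence below (commands, names, and sizes copied from the trace; only the $rpc shell variable is added for brevity). The point is to show that an SPDK logical volume exported over nbd behaves like a real block device all the way up to mkfs:

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB malloc bdev, 512 B blocks
$rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
$rpc bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol, addressed as lvs/lvol
$rpc nbd_start_disk lvs/lvol /dev/nbd0                 # export the lvol as /dev/nbd0
mkfs.ext4 /dev/nbd0                                    # format it end to end
$rpc nbd_stop_disk /dev/nbd0                           # tear the export down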
00:08:25.599 03:53:08 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:25.599 03:53:08 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:25.599 03:53:08 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:25.599 03:53:08 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:25.599 03:53:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:25.599 ************************************ 00:08:25.599 START TEST bdev_verify 00:08:25.599 ************************************ 00:08:25.599 03:53:08 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:25.599 [2024-12-07 03:53:08.291144] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:08:25.599 [2024-12-07 03:53:08.291272] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61472 ] 00:08:25.859 [2024-12-07 03:53:08.472301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:25.859 [2024-12-07 03:53:08.585942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.859 [2024-12-07 03:53:08.585982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.797 Running I/O for 5 seconds... 00:08:29.133 16128.00 IOPS, 63.00 MiB/s [2024-12-07T03:53:12.803Z] 16160.00 IOPS, 63.12 MiB/s [2024-12-07T03:53:13.734Z] 16448.00 IOPS, 64.25 MiB/s [2024-12-07T03:53:14.670Z] 16208.00 IOPS, 63.31 MiB/s [2024-12-07T03:53:14.670Z] 16140.80 IOPS, 63.05 MiB/s 00:08:31.934 Latency(us) 00:08:31.934 [2024-12-07T03:53:14.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.934 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:31.934 Verification LBA range: start 0x0 length 0xbd0bd 00:08:31.934 Nvme0n1 : 5.06 1213.28 4.74 0.00 0.00 105239.48 16634.04 85065.20 00:08:31.934 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:31.934 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:31.934 Nvme0n1 : 5.06 1441.86 5.63 0.00 0.00 88562.28 16212.92 85486.32 00:08:31.934 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:31.934 Verification LBA range: start 0x0 length 0xa0000 00:08:31.934 Nvme1n1 : 5.07 1212.40 4.74 0.00 0.00 105080.99 20108.23 85907.43 00:08:31.934 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:31.934 Verification LBA range: start 0xa0000 length 0xa0000 00:08:31.934 Nvme1n1 : 5.06 1441.31 5.63 0.00 0.00 88478.19 19160.73 81696.28 00:08:31.934 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:31.934 Verification LBA range: start 0x0 length 0x80000 00:08:31.934 Nvme2n1 : 5.07 1211.58 4.73 0.00 0.00 105029.25 18318.50 87591.89 00:08:31.934 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:31.934 Verification LBA range: start 0x80000 length 0x80000 00:08:31.934 Nvme2n1 : 5.06 1440.77 5.63 0.00 0.00 88247.09 21371.58 75800.67 00:08:31.934 Job: 
Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:31.934 Verification LBA range: start 0x0 length 0x80000 00:08:31.934 Nvme2n2 : 5.07 1211.24 4.73 0.00 0.00 104905.28 18529.05 90960.81 00:08:31.934 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:31.934 Verification LBA range: start 0x80000 length 0x80000 00:08:31.935 Nvme2n2 : 5.07 1439.97 5.62 0.00 0.00 88162.99 20950.46 72852.87 00:08:31.935 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:31.935 Verification LBA range: start 0x0 length 0x80000 00:08:31.935 Nvme2n3 : 5.07 1210.93 4.73 0.00 0.00 104763.99 18844.89 92224.15 00:08:31.935 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:31.935 Verification LBA range: start 0x80000 length 0x80000 00:08:31.935 Nvme2n3 : 5.07 1439.53 5.62 0.00 0.00 88031.81 20318.79 71589.53 00:08:31.935 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:31.935 Verification LBA range: start 0x0 length 0x20000 00:08:31.935 Nvme3n1 : 5.08 1210.45 4.73 0.00 0.00 104635.51 12844.00 85486.32 00:08:31.935 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:31.935 Verification LBA range: start 0x20000 length 0x20000 00:08:31.935 Nvme3n1 : 5.08 1450.05 5.66 0.00 0.00 87345.63 2566.17 70747.30 00:08:31.935 [2024-12-07T03:53:14.671Z] =================================================================================================================== 00:08:31.935 [2024-12-07T03:53:14.671Z] Total : 15923.39 62.20 0.00 0.00 95813.30 2566.17 92224.15 00:08:33.317 00:08:33.317 real 0m7.531s 00:08:33.317 user 0m13.885s 00:08:33.317 sys 0m0.334s 00:08:33.317 03:53:15 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.317 03:53:15 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:33.317 ************************************ 00:08:33.317 END TEST bdev_verify 00:08:33.317 ************************************ 00:08:33.317 03:53:15 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:33.317 03:53:15 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:33.317 03:53:15 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.317 03:53:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:33.317 ************************************ 00:08:33.317 START TEST bdev_verify_big_io 00:08:33.317 ************************************ 00:08:33.317 03:53:15 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:33.317 [2024-12-07 03:53:15.904403] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
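The bdev_verify pass that just completed is a single bdevperf invocation; the arguments below are copied from its run_test line, and the big-I/O pass now starting is the same command with -o 65536. A rough reading of the flags for anyone replaying it by hand (-C is inferred from the shape of the result table rather than from the tool's help text):

# -q 128     queue depth per job
# -o 4096    I/O size in bytes
# -w verify  write a pattern, read it back, compare
# -t 5       run for five seconds
# -m 0x3     run reactors on cores 0 and 1
# -C         appears to let every core drive every bdev, which is why each
#            NvmeXnY shows one job on core mask 0x1 and one on 0x2 above
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3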
00:08:33.317 [2024-12-07 03:53:15.904527] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61570 ] 00:08:33.577 [2024-12-07 03:53:16.090256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:33.577 [2024-12-07 03:53:16.200871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.577 [2024-12-07 03:53:16.200894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.516 Running I/O for 5 seconds... 00:08:39.191 2018.00 IOPS, 126.12 MiB/s [2024-12-07T03:53:22.867Z] 3208.50 IOPS, 200.53 MiB/s [2024-12-07T03:53:23.127Z] 3974.67 IOPS, 248.42 MiB/s 00:08:40.391 Latency(us) 00:08:40.391 [2024-12-07T03:53:23.127Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:40.391 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:40.391 Verification LBA range: start 0x0 length 0xbd0b 00:08:40.391 Nvme0n1 : 5.53 104.19 6.51 0.00 0.00 1181501.44 14949.58 1246499.98 00:08:40.391 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:40.391 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:40.391 Nvme0n1 : 5.42 236.33 14.77 0.00 0.00 524126.38 33268.07 522182.43 00:08:40.391 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:40.391 Verification LBA range: start 0x0 length 0xa000 00:08:40.391 Nvme1n1 : 5.60 110.87 6.93 0.00 0.00 1054590.45 33689.19 1098267.55 00:08:40.391 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:40.391 Verification LBA range: start 0xa000 length 0xa000 00:08:40.391 Nvme1n1 : 5.32 239.14 14.95 0.00 0.00 515512.57 58956.08 616512.15 00:08:40.391 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:40.391 Verification LBA range: start 0x0 length 0x8000 00:08:40.391 Nvme2n1 : 5.64 117.26 7.33 0.00 0.00 978403.52 33478.63 1111743.23 00:08:40.391 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:40.391 Verification LBA range: start 0x8000 length 0x8000 00:08:40.391 Nvme2n1 : 5.42 239.75 14.98 0.00 0.00 502854.26 90960.81 515444.59 00:08:40.391 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:40.391 Verification LBA range: start 0x0 length 0x8000 00:08:40.391 Nvme2n2 : 5.74 134.24 8.39 0.00 0.00 825769.19 26214.40 1320616.20 00:08:40.391 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:40.391 Verification LBA range: start 0x8000 length 0x8000 00:08:40.391 Nvme2n2 : 5.45 255.06 15.94 0.00 0.00 473309.05 6737.84 491862.16 00:08:40.391 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:40.391 Verification LBA range: start 0x0 length 0x8000 00:08:40.391 Nvme2n3 : 5.89 175.96 11.00 0.00 0.00 606710.70 6632.56 2263913.48 00:08:40.391 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:40.392 Verification LBA range: start 0x8000 length 0x8000 00:08:40.392 Nvme2n3 : 5.46 254.81 15.93 0.00 0.00 466149.38 7001.03 505337.83 00:08:40.392 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:40.392 Verification LBA range: start 0x0 length 0x2000 00:08:40.392 Nvme3n1 : 6.08 282.12 17.63 0.00 0.00 369249.50 661.28 2304340.51 00:08:40.392 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 
128, IO size: 65536) 00:08:40.392 Verification LBA range: start 0x2000 length 0x2000 00:08:40.392 Nvme3n1 : 5.46 258.36 16.15 0.00 0.00 452763.43 8685.49 542395.94 00:08:40.392 [2024-12-07T03:53:23.128Z] =================================================================================================================== 00:08:40.392 [2024-12-07T03:53:23.128Z] Total : 2408.08 150.51 0.00 0.00 581485.22 661.28 2304340.51 00:08:42.302 00:08:42.302 real 0m9.199s 00:08:42.302 user 0m17.188s 00:08:42.302 sys 0m0.355s 00:08:42.302 03:53:25 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.302 03:53:25 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:42.302 ************************************ 00:08:42.302 END TEST bdev_verify_big_io 00:08:42.302 ************************************ 00:08:42.561 03:53:25 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:42.561 03:53:25 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:42.561 03:53:25 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.561 03:53:25 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:42.561 ************************************ 00:08:42.561 START TEST bdev_write_zeroes 00:08:42.561 ************************************ 00:08:42.561 03:53:25 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:42.561 [2024-12-07 03:53:25.180523] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:08:42.561 [2024-12-07 03:53:25.180674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61690 ] 00:08:42.819 [2024-12-07 03:53:25.363235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.819 [2024-12-07 03:53:25.470086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.755 Running I/O for 1 seconds... 
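A quick sanity check on the IOPS and MiB/s columns in these bdevperf tables: throughput is just IOPS times the -o I/O size. Two spot checks against the verify and big-I/O numbers above (bc truncates at the given scale, which matches the figures the tool printed):

$ echo "scale=2; 16128 * 4096 / 1048576" | bc    # bdev_verify, -o 4096
63.00
$ echo "scale=2; 2018 * 65536 / 1048576" | bc    # bdev_verify_big_io, -o 65536
126.12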
00:08:44.691 76032.00 IOPS, 297.00 MiB/s 00:08:44.691 Latency(us) 00:08:44.691 [2024-12-07T03:53:27.427Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.691 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:44.691 Nvme0n1 : 1.02 12555.45 49.04 0.00 0.00 10174.78 8053.82 26635.51 00:08:44.691 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:44.691 Nvme1n1 : 1.03 12544.12 49.00 0.00 0.00 10172.83 8369.66 26424.96 00:08:44.691 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:44.691 Nvme2n1 : 1.03 12533.06 48.96 0.00 0.00 10139.12 8053.82 23056.04 00:08:44.691 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:44.691 Nvme2n2 : 1.03 12522.78 48.92 0.00 0.00 10114.85 8053.82 23371.87 00:08:44.691 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:44.691 Nvme2n3 : 1.03 12512.60 48.88 0.00 0.00 10089.85 8053.82 24214.10 00:08:44.691 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:44.691 Nvme3n1 : 1.03 12502.34 48.84 0.00 0.00 10059.99 6948.40 25372.17 00:08:44.691 [2024-12-07T03:53:27.427Z] =================================================================================================================== 00:08:44.691 [2024-12-07T03:53:27.427Z] Total : 75170.36 293.63 0.00 0.00 10125.24 6948.40 26635.51 00:08:45.630 00:08:45.630 real 0m3.180s 00:08:45.630 user 0m2.793s 00:08:45.630 sys 0m0.274s 00:08:45.630 03:53:28 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.630 03:53:28 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:45.630 ************************************ 00:08:45.630 END TEST bdev_write_zeroes 00:08:45.630 ************************************ 00:08:45.630 03:53:28 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:45.630 03:53:28 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:45.630 03:53:28 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.630 03:53:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:45.630 ************************************ 00:08:45.630 START TEST bdev_json_nonenclosed 00:08:45.630 ************************************ 00:08:45.630 03:53:28 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:45.889 [2024-12-07 03:53:28.445496] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
00:08:45.889 [2024-12-07 03:53:28.445625] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61743 ] 00:08:46.149 [2024-12-07 03:53:28.627520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.149 [2024-12-07 03:53:28.731108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.149 [2024-12-07 03:53:28.731195] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:46.149 [2024-12-07 03:53:28.731215] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:46.149 [2024-12-07 03:53:28.731227] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:46.409 00:08:46.409 real 0m0.623s 00:08:46.409 user 0m0.384s 00:08:46.409 sys 0m0.134s 00:08:46.409 03:53:28 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.409 03:53:28 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:46.409 ************************************ 00:08:46.409 END TEST bdev_json_nonenclosed 00:08:46.409 ************************************ 00:08:46.409 03:53:29 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:46.409 03:53:29 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:46.409 03:53:29 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.409 03:53:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:46.409 ************************************ 00:08:46.409 START TEST bdev_json_nonarray 00:08:46.409 ************************************ 00:08:46.409 03:53:29 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:46.668 [2024-12-07 03:53:29.160286] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:08:46.668 [2024-12-07 03:53:29.160411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61769 ] 00:08:46.668 [2024-12-07 03:53:29.346902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.928 [2024-12-07 03:53:29.456123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.928 [2024-12-07 03:53:29.456221] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
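Both JSON negative tests follow the same pattern: hand bdevperf a deliberately malformed config and expect json_config_prepare_ctx to reject it before any bdev is created. A sketch of the idea; the comments show a guessed shape for the two fixture files, whose actual contents are not reproduced in this log:

    # hypothetical shapes of the malformed configs (illustrative only):
    #   nonenclosed.json  ->  "subsystems": []       (top level not enclosed in {})
    #   nonarray.json     ->  { "subsystems": {} }   ("subsystems" is not an array)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json \
        -q 128 -o 4096 -w write_zeroes -t 1
    # expected to fail with: Invalid JSON configuration: not enclosed in {}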
00:08:46.928 [2024-12-07 03:53:29.456243] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:46.928 [2024-12-07 03:53:29.456256] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:47.188 00:08:47.188 real 0m0.647s 00:08:47.188 user 0m0.389s 00:08:47.188 sys 0m0.153s 00:08:47.188 03:53:29 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.188 03:53:29 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:47.188 ************************************ 00:08:47.188 END TEST bdev_json_nonarray 00:08:47.188 ************************************ 00:08:47.188 03:53:29 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:08:47.188 03:53:29 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:08:47.188 03:53:29 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:08:47.188 03:53:29 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:08:47.188 03:53:29 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:08:47.188 03:53:29 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:47.188 03:53:29 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:47.188 03:53:29 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:08:47.188 03:53:29 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:08:47.188 03:53:29 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:08:47.188 03:53:29 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:08:47.188 00:08:47.188 real 0m42.717s 00:08:47.188 user 1m2.975s 00:08:47.188 sys 0m7.698s 00:08:47.188 03:53:29 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:47.188 03:53:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:47.188 ************************************ 00:08:47.188 END TEST blockdev_nvme 00:08:47.188 ************************************ 00:08:47.188 03:53:29 -- spdk/autotest.sh@209 -- # uname -s 00:08:47.188 03:53:29 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:08:47.188 03:53:29 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:47.188 03:53:29 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:47.188 03:53:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:47.188 03:53:29 -- common/autotest_common.sh@10 -- # set +x 00:08:47.188 ************************************ 00:08:47.188 START TEST blockdev_nvme_gpt 00:08:47.188 ************************************ 00:08:47.188 03:53:29 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:47.449 * Looking for test storage... 
00:08:47.449 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:47.449 03:53:30 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:47.449 03:53:30 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:08:47.449 03:53:30 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:47.449 03:53:30 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:47.449 03:53:30 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:47.449 03:53:30 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:47.449 03:53:30 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:47.449 03:53:30 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:08:47.449 03:53:30 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:08:47.449 03:53:30 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:08:47.449 03:53:30 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:08:47.449 03:53:30 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:08:47.449 03:53:30 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:08:47.449 03:53:30 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:08:47.449 03:53:30 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:47.449 03:53:30 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:08:47.449 03:53:30 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:08:47.449 03:53:30 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:47.449 03:53:30 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:47.449 03:53:30 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:08:47.449 03:53:30 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:08:47.449 03:53:30 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:47.449 03:53:30 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:08:47.449 03:53:30 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:08:47.449 03:53:30 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:08:47.449 03:53:30 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:08:47.449 03:53:30 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:47.449 03:53:30 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:08:47.449 03:53:30 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:08:47.449 03:53:30 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:47.449 03:53:30 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:47.449 03:53:30 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:08:47.449 03:53:30 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:47.449 03:53:30 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:47.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.449 --rc genhtml_branch_coverage=1 00:08:47.449 --rc genhtml_function_coverage=1 00:08:47.449 --rc genhtml_legend=1 00:08:47.449 --rc geninfo_all_blocks=1 00:08:47.449 --rc geninfo_unexecuted_blocks=1 00:08:47.449 00:08:47.449 ' 00:08:47.449 03:53:30 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:47.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.449 --rc 
genhtml_branch_coverage=1 00:08:47.449 --rc genhtml_function_coverage=1 00:08:47.449 --rc genhtml_legend=1 00:08:47.449 --rc geninfo_all_blocks=1 00:08:47.449 --rc geninfo_unexecuted_blocks=1 00:08:47.449 00:08:47.449 ' 00:08:47.449 03:53:30 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:47.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.449 --rc genhtml_branch_coverage=1 00:08:47.449 --rc genhtml_function_coverage=1 00:08:47.449 --rc genhtml_legend=1 00:08:47.449 --rc geninfo_all_blocks=1 00:08:47.449 --rc geninfo_unexecuted_blocks=1 00:08:47.449 00:08:47.449 ' 00:08:47.449 03:53:30 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:47.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:47.449 --rc genhtml_branch_coverage=1 00:08:47.449 --rc genhtml_function_coverage=1 00:08:47.449 --rc genhtml_legend=1 00:08:47.449 --rc geninfo_all_blocks=1 00:08:47.449 --rc geninfo_unexecuted_blocks=1 00:08:47.449 00:08:47.449 ' 00:08:47.449 03:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:47.449 03:53:30 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:08:47.449 03:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:47.449 03:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:47.449 03:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:47.449 03:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:47.449 03:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:47.449 03:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:47.449 03:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:08:47.449 03:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:08:47.449 03:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:08:47.449 03:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:08:47.449 03:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:08:47.449 03:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:08:47.449 03:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:08:47.449 03:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:08:47.449 03:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:08:47.449 03:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:08:47.449 03:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:08:47.449 03:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:08:47.449 03:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:08:47.450 03:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:08:47.450 03:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:08:47.450 03:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:08:47.450 03:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61853 00:08:47.450 03:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:47.450 03:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:47.450 03:53:30 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61853 00:08:47.450 03:53:30 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 61853 ']' 00:08:47.450 03:53:30 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.450 03:53:30 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.450 03:53:30 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.450 03:53:30 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.450 03:53:30 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:47.709 [2024-12-07 03:53:30.256819] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:08:47.709 [2024-12-07 03:53:30.257133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61853 ] 00:08:47.709 [2024-12-07 03:53:30.443718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.969 [2024-12-07 03:53:30.554027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.908 03:53:31 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.908 03:53:31 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:08:48.908 03:53:31 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:08:48.908 03:53:31 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:08:48.908 03:53:31 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:49.477 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:49.477 Waiting for block devices as requested 00:08:49.734 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:49.734 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:49.992 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:49.992 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:55.261 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:55.261 03:53:37 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:08:55.261 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:08:55.261 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:08:55.261 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:08:55.261 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:08:55.261 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:08:55.261 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:08:55.261 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:08:55.261 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:55.261 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:08:55.261 03:53:37 
blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:08:55.261 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:55.261 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:55.261 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:08:55.261 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:08:55.261 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:55.261 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:08:55.261 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:08:55.261 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:55.261 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:55.261 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:08:55.262 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:08:55.262 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:55.262 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:08:55.262 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:08:55.262 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:55.262 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:55.262 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:55.262 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:08:55.262 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:08:55.262 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:08:55.262 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:55.262 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:55.262 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:08:55.262 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:08:55.262 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:08:55.262 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:55.262 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:08:55.262 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:08:55.262 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:55.262 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:08:55.262 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:08:55.262 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:08:55.262 03:53:37 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 
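The get_zoned_devs walk traced above reduces to one sysfs check per namespace: read /sys/block/<ns>/queue/zoned and skip anything that does not report "none". A condensed restatement of that filter (assumes the same Linux sysfs layout used here):

    # list zoned NVMe namespaces so the later GPT steps can avoid them
    for ns in /sys/block/nvme*n*; do
        [[ -e "$ns/queue/zoned" ]] || continue
        if [[ "$(cat "$ns/queue/zoned")" != none ]]; then
            echo "zoned: ${ns##*/}"
        fi
    done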
00:08:55.262 03:53:37 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:08:55.262 03:53:37 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:08:55.262 03:53:37 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:08:55.262 03:53:37 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:55.262 03:53:37 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:08:55.262 03:53:37 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:08:55.262 03:53:37 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:08:55.262 03:53:37 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:08:55.262 BYT; 00:08:55.262 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:55.262 03:53:37 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:08:55.262 BYT; 00:08:55.262 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:55.262 03:53:37 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:08:55.262 03:53:37 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:08:55.262 03:53:37 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:08:55.262 03:53:37 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:55.262 03:53:37 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:55.262 03:53:37 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:55.262 03:53:37 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:08:55.262 03:53:37 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:08:55.262 03:53:37 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:55.262 03:53:37 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:55.262 03:53:37 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:08:55.262 03:53:37 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:08:55.262 03:53:37 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:55.262 03:53:37 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:55.262 03:53:37 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:55.262 03:53:37 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:55.262 03:53:37 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:55.262 03:53:37 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:08:55.262 03:53:37 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:08:55.262 03:53:37 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:55.262 03:53:37 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:55.262 03:53:37 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:08:55.262 03:53:37 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:08:55.262 03:53:37 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:55.262 03:53:37 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:55.262 03:53:37 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:55.262 03:53:37 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:55.262 03:53:37 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:55.262 03:53:37 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:08:56.662 The operation has completed successfully. 00:08:56.662 03:53:38 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:08:57.599 The operation has completed successfully. 00:08:57.599 03:53:39 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:58.169 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:58.738 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:58.738 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:58.738 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:58.997 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:58.997 03:53:41 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:08:58.997 03:53:41 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.997 03:53:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:58.997 [] 00:08:58.997 03:53:41 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.997 03:53:41 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:08:58.997 03:53:41 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:08:58.997 03:53:41 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:58.997 03:53:41 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:58.997 03:53:41 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:58.997 03:53:41 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.997 03:53:41 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:59.567 03:53:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.567 03:53:42 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:08:59.567 03:53:42 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.567 03:53:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:59.567 03:53:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.567 03:53:42 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:08:59.567 03:53:42 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:08:59.567 03:53:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.567 03:53:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:59.567 03:53:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.567 03:53:42 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:08:59.567 03:53:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.567 03:53:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:59.567 03:53:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.567 03:53:42 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:59.567 03:53:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.567 03:53:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:59.567 03:53:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.567 03:53:42 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:08:59.567 03:53:42 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:08:59.567 03:53:42 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.567 03:53:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:59.567 03:53:42 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:08:59.567 03:53:42 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.567 03:53:42 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:08:59.567 03:53:42 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:08:59.568 03:53:42 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "461420db-3fd6-426b-81b1-9c052e3be034"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "461420db-3fd6-426b-81b1-9c052e3be034",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "3dd8b431-3d6d-4c74-bcbe-104801dd86b8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3dd8b431-3d6d-4c74-bcbe-104801dd86b8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "c44e119f-a072-4f11-b29b-346c584d99dd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c44e119f-a072-4f11-b29b-346c584d99dd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "e8a1f5a5-cd76-4bb4-abc9-760918f25b4c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e8a1f5a5-cd76-4bb4-abc9-760918f25b4c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "cd70fcc6-659e-4621-9ee8-f5b73c83988c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "cd70fcc6-659e-4621-9ee8-f5b73c83988c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:59.568 03:53:42 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:08:59.568 03:53:42 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:08:59.568 03:53:42 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:08:59.568 03:53:42 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 61853 00:08:59.568 03:53:42 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 61853 ']' 00:08:59.568 03:53:42 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 61853 00:08:59.568 03:53:42 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:08:59.568 03:53:42 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:59.827 03:53:42 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61853 00:08:59.827 killing process with pid 61853 00:08:59.828 03:53:42 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:59.828 03:53:42 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:59.828 03:53:42 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61853' 00:08:59.828 03:53:42 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 61853 00:08:59.828 03:53:42 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 61853 00:09:02.367 03:53:44 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:02.367 03:53:44 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:02.367 03:53:44 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:09:02.367 03:53:44 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.367 03:53:44 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:02.367 ************************************ 00:09:02.367 START TEST bdev_hello_world 00:09:02.367 ************************************ 00:09:02.367 03:53:44 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:09:02.367 [2024-12-07 03:53:44.700124] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:09:02.367 [2024-12-07 03:53:44.700449] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62502 ] 00:09:02.367 [2024-12-07 03:53:44.881918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.367 [2024-12-07 03:53:44.993626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.939 [2024-12-07 03:53:45.649155] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:09:02.939 [2024-12-07 03:53:45.649201] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:09:02.939 [2024-12-07 03:53:45.649225] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:09:02.939 [2024-12-07 03:53:45.652103] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:09:02.939 [2024-12-07 03:53:45.652659] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:09:02.939 [2024-12-07 03:53:45.652686] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:09:02.939 [2024-12-07 03:53:45.653011] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
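The hello_world pass can be reproduced by hand with the same example binary: hello_bdev opens the named bdev, writes "Hello World!", reads it back, and stops the app, exactly as the NOTICE lines above show. A minimal sketch, assuming bdev.json has been regenerated as in the setup step:

    # end-to-end smoke test of one bdev: open, write, read back, compare
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -b Nvme0n1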
00:09:02.939 00:09:02.939 [2024-12-07 03:53:45.653039] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:09:04.319 00:09:04.319 real 0m2.121s 00:09:04.319 user 0m1.741s 00:09:04.319 sys 0m0.272s 00:09:04.319 ************************************ 00:09:04.319 END TEST bdev_hello_world 00:09:04.319 ************************************ 00:09:04.319 03:53:46 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:04.319 03:53:46 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:04.319 03:53:46 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:09:04.319 03:53:46 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:04.319 03:53:46 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.319 03:53:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:04.319 ************************************ 00:09:04.319 START TEST bdev_bounds 00:09:04.319 ************************************ 00:09:04.319 03:53:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:09:04.319 03:53:46 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62544 00:09:04.319 Process bdevio pid: 62544 00:09:04.319 03:53:46 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:09:04.319 03:53:46 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62544' 00:09:04.319 03:53:46 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:04.319 03:53:46 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62544 00:09:04.319 03:53:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62544 ']' 00:09:04.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.319 03:53:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.319 03:53:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.319 03:53:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.319 03:53:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.319 03:53:46 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:04.319 [2024-12-07 03:53:46.901671] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
00:09:04.319 [2024-12-07 03:53:46.902260] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62544 ] 00:09:04.579 [2024-12-07 03:53:47.085377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:04.579 [2024-12-07 03:53:47.193838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.579 [2024-12-07 03:53:47.193988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.579 [2024-12-07 03:53:47.194017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.149 03:53:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:05.149 03:53:47 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:09:05.149 03:53:47 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:05.410 I/O targets: 00:09:05.410 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:09:05.410 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:09:05.410 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:09:05.410 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:05.410 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:05.410 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:05.410 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:05.410 00:09:05.410 00:09:05.410 CUnit - A unit testing framework for C - Version 2.1-3 00:09:05.410 http://cunit.sourceforge.net/ 00:09:05.410 00:09:05.410 00:09:05.410 Suite: bdevio tests on: Nvme3n1 00:09:05.410 Test: blockdev write read block ...passed 00:09:05.410 Test: blockdev write zeroes read block ...passed 00:09:05.410 Test: blockdev write zeroes read no split ...passed 00:09:05.410 Test: blockdev write zeroes read split ...passed 00:09:05.410 Test: blockdev write zeroes read split partial ...passed 00:09:05.410 Test: blockdev reset ...[2024-12-07 03:53:48.048632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:09:05.410 passed 00:09:05.410 Test: blockdev write read 8 blocks ...[2024-12-07 03:53:48.052655] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:09:05.410 passed 00:09:05.410 Test: blockdev write read size > 128k ...passed 00:09:05.410 Test: blockdev write read invalid size ...passed 00:09:05.410 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:05.410 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:05.410 Test: blockdev write read max offset ...passed 00:09:05.410 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:05.410 Test: blockdev writev readv 8 blocks ...passed 00:09:05.410 Test: blockdev writev readv 30 x 1block ...passed 00:09:05.410 Test: blockdev writev readv block ...passed 00:09:05.410 Test: blockdev writev readv size > 128k ...passed 00:09:05.410 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:05.410 Test: blockdev comparev and writev ...[2024-12-07 03:53:48.067795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b9404000 len:0x1000 00:09:05.410 [2024-12-07 03:53:48.067911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:05.410 passed 00:09:05.410 Test: blockdev nvme passthru rw ...passed 00:09:05.410 Test: blockdev nvme passthru vendor specific ...passed 00:09:05.410 Test: blockdev nvme admin passthru ...[2024-12-07 03:53:48.069359] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 [2024-12-07 03:53:48.069512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:05.410 passed 00:09:05.410 Test: blockdev copy ...passed 00:09:05.410 Suite: bdevio tests on: Nvme2n3 00:09:05.410 Test: blockdev write read block ...passed 00:09:05.410 Test: blockdev write zeroes read block ...passed 00:09:05.410 Test: blockdev write zeroes read no split ...passed 00:09:05.410 Test: blockdev write zeroes read split ...passed 00:09:05.670 Test: blockdev write zeroes read split partial ...passed 00:09:05.670 Test: blockdev reset ...[2024-12-07 03:53:48.155923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:05.670 [2024-12-07 03:53:48.161384] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:09:05.670 Test: blockdev write read 8 blocks ...
00:09:05.670 passed 00:09:05.670 Test: blockdev write read size > 128k ...passed 00:09:05.670 Test: blockdev write read invalid size ...passed 00:09:05.670 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:05.670 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:05.670 Test: blockdev write read max offset ...passed 00:09:05.670 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:05.670 Test: blockdev writev readv 8 blocks ...passed 00:09:05.670 Test: blockdev writev readv 30 x 1block ...passed 00:09:05.670 Test: blockdev writev readv block ...passed 00:09:05.670 Test: blockdev writev readv size > 128k ...passed 00:09:05.670 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:05.670 Test: blockdev comparev and writev ...[2024-12-07 03:53:48.172689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b9402000 len:0x1000 00:09:05.670 [2024-12-07 03:53:48.172875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:05.670 passed 00:09:05.670 Test: blockdev nvme passthru rw ...passed 00:09:05.670 Test: blockdev nvme passthru vendor specific ...[2024-12-07 03:53:48.174305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 [2024-12-07 03:53:48.174467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:05.670 passed 00:09:05.670 Test: blockdev nvme admin passthru ...passed 00:09:05.670 Test: blockdev copy ...passed 00:09:05.670 Suite: bdevio tests on: Nvme2n2 00:09:05.670 Test: blockdev write read block ...passed 00:09:05.670 Test: blockdev write zeroes read block ...passed 00:09:05.670 Test: blockdev write zeroes read no split ...passed 00:09:05.670 Test: blockdev write zeroes read split ...passed 00:09:05.670 Test: blockdev write zeroes read split partial ...passed 00:09:05.670 Test: blockdev reset ...[2024-12-07 03:53:48.249056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:05.670 [2024-12-07 03:53:48.254009] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:09:05.670 Test: blockdev write read 8 blocks ...
00:09:05.670 passed 00:09:05.670 Test: blockdev write read size > 128k ...passed 00:09:05.670 Test: blockdev write read invalid size ...passed 00:09:05.670 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:05.670 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:05.670 Test: blockdev write read max offset ...passed 00:09:05.670 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:05.670 Test: blockdev writev readv 8 blocks ...passed 00:09:05.670 Test: blockdev writev readv 30 x 1block ...passed 00:09:05.670 Test: blockdev writev readv block ...passed 00:09:05.670 Test: blockdev writev readv size > 128k ...passed 00:09:05.670 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:05.670 Test: blockdev comparev and writev ...[2024-12-07 03:53:48.266014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cda38000 len:0x1000 00:09:05.670 [2024-12-07 03:53:48.266180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:05.670 passed 00:09:05.670 Test: blockdev nvme passthru rw ...passed 00:09:05.670 Test: blockdev nvme passthru vendor specific ...[2024-12-07 03:53:48.267591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 [2024-12-07 03:53:48.267738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:05.670 passed 00:09:05.670 Test: blockdev nvme admin passthru ...passed 00:09:05.670 Test: blockdev copy ...passed 00:09:05.670 Suite: bdevio tests on: Nvme2n1 00:09:05.670 Test: blockdev write read block ...passed 00:09:05.670 Test: blockdev write zeroes read block ...passed 00:09:05.670 Test: blockdev write zeroes read no split ...passed 00:09:05.670 Test: blockdev write zeroes read split ...passed 00:09:05.670 Test: blockdev write zeroes read split partial ...passed 00:09:05.670 Test: blockdev reset ...[2024-12-07 03:53:48.341178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:09:05.670 passed 00:09:05.670 Test: blockdev write read 8 blocks ...[2024-12-07 03:53:48.345891] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:09:05.670 passed 00:09:05.670 Test: blockdev write read size > 128k ...passed 00:09:05.670 Test: blockdev write read invalid size ...passed 00:09:05.670 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:05.670 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:05.670 Test: blockdev write read max offset ...passed 00:09:05.670 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:05.670 Test: blockdev writev readv 8 blocks ...passed 00:09:05.670 Test: blockdev writev readv 30 x 1block ...passed 00:09:05.670 Test: blockdev writev readv block ...passed 00:09:05.670 Test: blockdev writev readv size > 128k ...passed 00:09:05.670 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:05.670 Test: blockdev comparev and writev ...[2024-12-07 03:53:48.356265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cda34000 len:0x1000 00:09:05.670 [2024-12-07 03:53:48.356315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:05.670 passed 00:09:05.670 Test: blockdev nvme passthru rw ...passed 00:09:05.670 Test: blockdev nvme passthru vendor specific ...passed 00:09:05.670 Test: blockdev nvme admin passthru ...[2024-12-07 03:53:48.357197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:05.670 [2024-12-07 03:53:48.357235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:05.670 passed 00:09:05.670 Test: blockdev copy ...passed 00:09:05.670 Suite: bdevio tests on: Nvme1n1p2 00:09:05.670 Test: blockdev write read block ...passed 00:09:05.670 Test: blockdev write zeroes read block ...passed 00:09:05.670 Test: blockdev write zeroes read no split ...passed 00:09:05.931 Test: blockdev write zeroes read split ...passed 00:09:05.931 Test: blockdev write zeroes read split partial ...passed 00:09:05.931 Test: blockdev reset ...[2024-12-07 03:53:48.435624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:09:05.931 passed 00:09:05.931 Test: blockdev write read 8 blocks ...[2024-12-07 03:53:48.440451] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:09:05.931 passed 00:09:05.931 Test: blockdev write read size > 128k ...passed 00:09:05.931 Test: blockdev write read invalid size ...passed 00:09:05.931 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:05.931 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:05.931 Test: blockdev write read max offset ...passed 00:09:05.931 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:05.931 Test: blockdev writev readv 8 blocks ...passed 00:09:05.931 Test: blockdev writev readv 30 x 1block ...passed 00:09:05.931 Test: blockdev writev readv block ...passed 00:09:05.931 Test: blockdev writev readv size > 128k ...passed 00:09:05.931 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:05.931 Test: blockdev comparev and writev ...[2024-12-07 03:53:48.449957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2cda30000 len:0x1000 00:09:05.931 [2024-12-07 03:53:48.450131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:05.931 passed 00:09:05.931 Test: blockdev nvme passthru rw ...passed 00:09:05.931 Test: blockdev nvme passthru vendor specific ...passed 00:09:05.931 Test: blockdev nvme admin passthru ...passed 00:09:05.931 Test: blockdev copy ...passed 00:09:05.931 Suite: bdevio tests on: Nvme1n1p1 00:09:05.931 Test: blockdev write read block ...passed 00:09:05.931 Test: blockdev write zeroes read block ...passed 00:09:05.931 Test: blockdev write zeroes read no split ...passed 00:09:05.931 Test: blockdev write zeroes read split ...passed 00:09:05.931 Test: blockdev write zeroes read split partial ...passed 00:09:05.931 Test: blockdev reset ...[2024-12-07 03:53:48.518255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:09:05.931 [2024-12-07 03:53:48.522736] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful.
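The blockdev reset tests in these suites exercise the disconnect/reconnect path reported by the nvme_ctrlr_disconnect and bdev_nvme_reset_ctrlr_complete notices. The same reset can be requested by hand over SPDK's RPC socket; a minimal sketch, assuming a controller attached under the name Nvme1 and the default socket path (both are assumptions here, not values read from this run):

  # ask the bdev_nvme module to reset the whole controller, as the
  # notices above show the bdevio test doing internally
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_reset_controller Nvme1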
00:09:05.931 passed 00:09:05.931 Test: blockdev write read 8 blocks ...passed 00:09:05.931 Test: blockdev write read size > 128k ...passed 00:09:05.931 Test: blockdev write read invalid size ...passed 00:09:05.931 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:05.931 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:05.931 Test: blockdev write read max offset ...passed 00:09:05.931 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:05.931 Test: blockdev writev readv 8 blocks ...passed 00:09:05.931 Test: blockdev writev readv 30 x 1block ...passed 00:09:05.931 Test: blockdev writev readv block ...passed 00:09:05.931 Test: blockdev writev readv size > 128k ...passed 00:09:05.931 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:05.931 Test: blockdev comparev and writev ...[2024-12-07 03:53:48.533051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b9e0e000 len:0x1000 00:09:05.931 [2024-12-07 03:53:48.533095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:05.931 passed 00:09:05.931 Test: blockdev nvme passthru rw ...passed 00:09:05.931 Test: blockdev nvme passthru vendor specific ...passed 00:09:05.931 Test: blockdev nvme admin passthru ...passed 00:09:05.931 Test: blockdev copy ...passed 00:09:05.931 Suite: bdevio tests on: Nvme0n1 00:09:05.931 Test: blockdev write read block ...passed 00:09:05.931 Test: blockdev write zeroes read block ...passed 00:09:05.931 Test: blockdev write zeroes read no split ...passed 00:09:05.931 Test: blockdev write zeroes read split ...passed 00:09:05.931 Test: blockdev write zeroes read split partial ...passed 00:09:05.931 Test: blockdev reset ...[2024-12-07 03:53:48.603406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:05.931 [2024-12-07 03:53:48.608081] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:05.931 passed 00:09:05.931 Test: blockdev write read 8 blocks ...passed 00:09:05.931 Test: blockdev write read size > 128k ...passed 00:09:05.931 Test: blockdev write read invalid size ...passed 00:09:05.931 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:05.931 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:05.931 Test: blockdev write read max offset ...passed 00:09:05.931 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:05.931 Test: blockdev writev readv 8 blocks ...passed 00:09:05.931 Test: blockdev writev readv 30 x 1block ...passed 00:09:05.931 Test: blockdev writev readv block ...passed 00:09:05.931 Test: blockdev writev readv size > 128k ...passed 00:09:05.931 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:05.931 Test: blockdev comparev and writev ...[2024-12-07 03:53:48.618486] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:09:05.931 separate metadata which is not supported yet.
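The Nvme0n1 suite skips comparev_and_writev because that namespace is formatted with a separate per-block metadata region, as the bdevio error notice says. Whether a namespace carries such metadata can be checked from the host; a sketch using nvme-cli's JSON output (the jq field names follow nvme-cli's id-ns JSON layout and should be treated as an assumption, as should the device path):

  # ms > 0 in the in-use LBA format (low nibble of flbas) means the
  # namespace is formatted with per-block metadata
  nvme id-ns /dev/nvme0n1 --output-format=json | jq '.lbafs[.flbas % 16].ms'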
00:09:05.931 passed 00:09:05.931 Test: blockdev nvme passthru rw ...passed 00:09:05.931 Test: blockdev nvme passthru vendor specific ...[2024-12-07 03:53:48.619448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:09:05.931 [2024-12-07 03:53:48.619613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:09:05.931 passed 00:09:05.931 Test: blockdev nvme admin passthru ...passed 00:09:05.931 Test: blockdev copy ...passed 00:09:05.931 00:09:05.931 Run Summary: Type Total Ran Passed Failed Inactive 00:09:05.931 suites 7 7 n/a 0 0 00:09:05.931 tests 161 161 161 0 0 00:09:05.931 asserts 1025 1025 1025 0 n/a 00:09:05.931 00:09:05.931 Elapsed time = 1.734 seconds 00:09:05.931 0 00:09:05.931 03:53:48 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62544 00:09:05.931 03:53:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62544 ']' 00:09:05.931 03:53:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62544 00:09:05.931 03:53:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:09:05.931 03:53:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.931 03:53:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62544 00:09:06.192 killing process with pid 62544 00:09:06.192 03:53:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:06.192 03:53:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:06.192 03:53:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62544' 00:09:06.192 03:53:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62544 00:09:06.192 03:53:48 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62544 00:09:07.131 03:53:49 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:09:07.131 00:09:07.131 real 0m2.893s 00:09:07.131 user 0m7.341s 00:09:07.131 sys 0m0.443s 00:09:07.131 03:53:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.131 ************************************ 00:09:07.131 END TEST bdev_bounds 00:09:07.131 ************************************ 00:09:07.131 03:53:49 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:09:07.131 03:53:49 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:07.131 03:53:49 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:07.131 03:53:49 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.132 03:53:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:07.132 ************************************ 00:09:07.132 START TEST bdev_nbd 00:09:07.132 ************************************ 00:09:07.132 03:53:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:07.132 03:53:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:09:07.132 03:53:49
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:09:07.132 03:53:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:07.132 03:53:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:07.132 03:53:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:07.132 03:53:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:09:07.132 03:53:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:09:07.132 03:53:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:09:07.132 03:53:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:07.132 03:53:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:09:07.132 03:53:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:09:07.132 03:53:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:07.132 03:53:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:09:07.132 03:53:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:07.132 03:53:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:09:07.132 03:53:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62609 00:09:07.132 03:53:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:07.132 03:53:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62609 /var/tmp/spdk-nbd.sock 00:09:07.132 03:53:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:07.132 03:53:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62609 ']' 00:09:07.132 03:53:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:07.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:07.132 03:53:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.132 03:53:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:07.132 03:53:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.132 03:53:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:07.391 [2024-12-07 03:53:49.896157] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
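waitforlisten above simply blocks until the freshly started bdev_svc process answers on its UNIX-domain RPC socket. Stripped of the helper's retry bookkeeping, the pattern reduces to roughly the loop below (a sketch; rpc_get_methods is SPDK's generic query RPC, and the retry budget here is arbitrary):

  # poll the RPC socket until the SPDK app is ready, give up after ~10 s
  for ((i = 0; i < 100; i++)); do
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock rpc_get_methods &> /dev/null; then
          break
      fi
      sleep 0.1
  done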
00:09:07.391 [2024-12-07 03:53:49.896297] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:07.391 [2024-12-07 03:53:50.083045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.652 [2024-12-07 03:53:50.192214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.222 03:53:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.222 03:53:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:09:08.222 03:53:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:08.222 03:53:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:08.222 03:53:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:08.222 03:53:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:08.222 03:53:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:08.222 03:53:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:08.222 03:53:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:08.222 03:53:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:08.222 03:53:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:09:08.222 03:53:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:08.222 03:53:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:08.222 03:53:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:08.222 03:53:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:09:08.481 03:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:08.481 03:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:08.481 03:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:08.481 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:08.481 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:08.481 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:08.481 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:08.481 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:08.481 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:08.481 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:08.481 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:08.481 03:53:51 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:08.481 1+0 records in 00:09:08.481 1+0 records out 00:09:08.481 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000937382 s, 4.4 MB/s 00:09:08.481 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.481 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:08.481 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.481 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:08.481 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:08.481 03:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:08.481 03:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:08.481 03:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:09:08.740 03:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:08.740 03:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:08.740 03:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:08.740 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:08.740 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:08.740 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:08.740 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:08.740 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:08.740 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:08.740 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:08.740 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:08.740 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:08.740 1+0 records in 00:09:08.740 1+0 records out 00:09:08.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000821491 s, 5.0 MB/s 00:09:08.740 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.740 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:08.740 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:08.740 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:08.740 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:08.740 03:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:08.740 03:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:08.740 03:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:09:09.000 03:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:09.000 03:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:09.000 03:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:09.000 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:09:09.000 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:09.000 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:09.000 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:09.000 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:09:09.000 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:09.000 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:09.000 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:09.000 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:09.000 1+0 records in 00:09:09.000 1+0 records out 00:09:09.000 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00521077 s, 786 kB/s 00:09:09.000 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.000 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:09.000 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.000 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:09.000 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:09.000 03:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:09.000 03:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:09.000 03:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:09.260 03:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:09.260 03:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:09.260 03:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:09.260 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:09:09.260 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:09.260 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:09.260 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:09.260 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:09:09.260 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:09.260 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:09.260 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:09.260 03:53:51 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:09.260 1+0 records in 00:09:09.260 1+0 records out 00:09:09.260 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00153355 s, 2.7 MB/s 00:09:09.260 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.260 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:09.260 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.261 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:09.261 03:53:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:09.261 03:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:09.261 03:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:09.261 03:53:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:09.520 03:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:09.520 03:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:09.520 03:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:09.520 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:09:09.520 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:09.520 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:09.520 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:09.520 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:09:09.520 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:09.520 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:09.520 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:09.520 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:09.520 1+0 records in 00:09:09.520 1+0 records out 00:09:09.520 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000816198 s, 5.0 MB/s 00:09:09.520 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.520 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:09.520 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.520 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:09.520 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:09.520 03:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:09.520 03:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:09.520 03:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
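Every nbd_start_disk above is followed by the same waitfornbd dance: poll /proc/partitions until the kernel publishes the device, then prove it actually serves I/O with a single direct 4 KiB read. Condensed into a standalone helper (a sketch of the traced logic; the run itself writes its scratch block to test/bdev/nbdtest inside the repo, /tmp is used here for brevity):

  waitfornbd() {
      local nbd_name=$1 i size
      # wait for the nbd device to show up in the kernel's partition list
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      # one O_DIRECT read of a 4096-byte block proves the device answers I/O
      dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]
  }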
00:09:09.780 03:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:09.780 03:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:09.780 03:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:09.780 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:09:09.780 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:09.780 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:09.780 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:09.780 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:09:09.780 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:09.780 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:09.780 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:09.780 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:09.780 1+0 records in 00:09:09.780 1+0 records out 00:09:09.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00263495 s, 1.6 MB/s 00:09:09.780 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.780 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:09.780 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:09.780 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:09.780 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:09.780 03:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:09.780 03:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:09.780 03:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:10.040 03:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:09:10.040 03:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:09:10.040 03:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:10.040 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:09:10.040 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:10.040 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:10.040 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:10.040 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:09:10.040 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:10.040 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:10.040 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:10.040 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:10.040 1+0 records in 00:09:10.040 1+0 records out 00:09:10.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000675897 s, 6.1 MB/s 00:09:10.040 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:10.040 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:10.040 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:10.040 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:10.040 03:53:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:10.040 03:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:10.040 03:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:10.040 03:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:10.300 03:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:10.300 { 00:09:10.300 "nbd_device": "/dev/nbd0", 00:09:10.300 "bdev_name": "Nvme0n1" 00:09:10.300 }, 00:09:10.300 { 00:09:10.300 "nbd_device": "/dev/nbd1", 00:09:10.300 "bdev_name": "Nvme1n1p1" 00:09:10.300 }, 00:09:10.300 { 00:09:10.300 "nbd_device": "/dev/nbd2", 00:09:10.300 "bdev_name": "Nvme1n1p2" 00:09:10.300 }, 00:09:10.300 { 00:09:10.300 "nbd_device": "/dev/nbd3", 00:09:10.300 "bdev_name": "Nvme2n1" 00:09:10.300 }, 00:09:10.300 { 00:09:10.300 "nbd_device": "/dev/nbd4", 00:09:10.300 "bdev_name": "Nvme2n2" 00:09:10.300 }, 00:09:10.300 { 00:09:10.300 "nbd_device": "/dev/nbd5", 00:09:10.300 "bdev_name": "Nvme2n3" 00:09:10.300 }, 00:09:10.300 { 00:09:10.300 "nbd_device": "/dev/nbd6", 00:09:10.300 "bdev_name": "Nvme3n1" 00:09:10.300 } 00:09:10.300 ]' 00:09:10.300 03:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:10.300 03:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:10.300 { 00:09:10.300 "nbd_device": "/dev/nbd0", 00:09:10.300 "bdev_name": "Nvme0n1" 00:09:10.300 }, 00:09:10.300 { 00:09:10.300 "nbd_device": "/dev/nbd1", 00:09:10.300 "bdev_name": "Nvme1n1p1" 00:09:10.300 }, 00:09:10.300 { 00:09:10.300 "nbd_device": "/dev/nbd2", 00:09:10.300 "bdev_name": "Nvme1n1p2" 00:09:10.300 }, 00:09:10.300 { 00:09:10.300 "nbd_device": "/dev/nbd3", 00:09:10.300 "bdev_name": "Nvme2n1" 00:09:10.300 }, 00:09:10.300 { 00:09:10.300 "nbd_device": "/dev/nbd4", 00:09:10.300 "bdev_name": "Nvme2n2" 00:09:10.300 }, 00:09:10.300 { 00:09:10.300 "nbd_device": "/dev/nbd5", 00:09:10.300 "bdev_name": "Nvme2n3" 00:09:10.300 }, 00:09:10.300 { 00:09:10.300 "nbd_device": "/dev/nbd6", 00:09:10.300 "bdev_name": "Nvme3n1" 00:09:10.300 } 00:09:10.300 ]' 00:09:10.300 03:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:10.300 03:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:09:10.300 03:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:10.300 03:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:09:10.300 03:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:10.300 03:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:10.300 03:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:10.300 03:53:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:10.559 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:10.559 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:10.559 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:10.559 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:10.559 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:10.559 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:10.559 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:10.559 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:10.559 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:10.559 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:10.837 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:10.837 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:10.837 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:10.837 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:10.837 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:10.837 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:10.837 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:10.837 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:10.837 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:10.837 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:11.096 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:11.096 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:11.096 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:11.096 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:11.096 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:11.096 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:11.096 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:11.096 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:11.096 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:11.096 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:11.096 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:09:11.096 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:11.096 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:11.096 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:11.096 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:11.096 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:11.096 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:11.096 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:11.096 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:11.096 03:53:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:11.355 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:11.355 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:11.355 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:11.355 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:11.355 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:11.355 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:11.355 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:11.355 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:11.355 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:11.355 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:11.614 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:11.614 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:11.614 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:11.614 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:11.614 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:11.614 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:11.614 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:11.614 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:11.614 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:11.614 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:11.872 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:11.872 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:11.872 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:09:11.872 03:53:54 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:11.872 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:11.873 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:11.873 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:11.873 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:11.873 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:11.873 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:11.873 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:12.154 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:12.154 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:12.154 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:12.154 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:12.154 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:12.154 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:12.154 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:12.154 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:12.154 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:12.154 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:09:12.154 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:12.154 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:09:12.154 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:12.154 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:12.154 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:12.154 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:12.154 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:12.154 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:12.154 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:12.154 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:12.154 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:12.154 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:12.154 03:53:54 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:12.154 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:12.154 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:09:12.154 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:12.154 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:12.154 03:53:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:09:12.457 /dev/nbd0 00:09:12.457 03:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:12.457 03:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:12.457 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:12.457 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:12.457 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:12.457 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:12.457 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:12.457 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:12.457 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:12.457 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:12.457 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:12.457 1+0 records in 00:09:12.457 1+0 records out 00:09:12.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000727257 s, 5.6 MB/s 00:09:12.457 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:12.457 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:12.457 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:12.457 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:12.457 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:12.457 03:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:12.457 03:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:12.457 03:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:09:12.747 /dev/nbd1 00:09:12.747 03:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:12.747 03:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:12.747 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:12.747 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:12.747 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:12.747 03:53:55 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:12.747 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:12.748 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:12.748 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:12.748 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:12.748 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:12.748 1+0 records in 00:09:12.748 1+0 records out 00:09:12.748 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535587 s, 7.6 MB/s 00:09:12.748 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:12.748 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:12.748 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:12.748 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:12.748 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:12.748 03:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:12.748 03:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:12.748 03:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:09:13.031 /dev/nbd10 00:09:13.031 03:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:13.031 03:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:13.031 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:09:13.031 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:13.031 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:13.031 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:13.031 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:09:13.031 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:13.031 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:13.031 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:13.031 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:13.032 1+0 records in 00:09:13.032 1+0 records out 00:09:13.032 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000787372 s, 5.2 MB/s 00:09:13.032 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.032 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:13.032 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.032 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 
'!=' 0 ']' 00:09:13.032 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:13.032 03:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:13.032 03:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:13.032 03:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:13.032 /dev/nbd11 00:09:13.289 03:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:13.289 03:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:13.289 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:09:13.289 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:13.289 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:13.289 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:13.289 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:09:13.289 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:13.289 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:13.289 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:13.289 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:13.289 1+0 records in 00:09:13.289 1+0 records out 00:09:13.289 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000737602 s, 5.6 MB/s 00:09:13.289 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.289 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:13.289 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.289 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:13.289 03:53:55 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:13.289 03:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:13.289 03:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:13.289 03:53:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:13.289 /dev/nbd12 00:09:13.548 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:13.548 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:13.548 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:09:13.548 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:13.548 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:13.548 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:13.548 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:09:13.548 03:53:56 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:13.548 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:13.548 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:13.548 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:13.548 1+0 records in 00:09:13.548 1+0 records out 00:09:13.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000605954 s, 6.8 MB/s 00:09:13.548 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.548 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:13.548 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.548 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:13.548 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:13.548 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:13.548 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:13.548 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:13.548 /dev/nbd13 00:09:13.807 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:13.807 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:13.807 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:09:13.807 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:13.807 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:13.807 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:13.807 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:09:13.807 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:13.807 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:13.807 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:13.807 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:13.807 1+0 records in 00:09:13.807 1+0 records out 00:09:13.807 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000911884 s, 4.5 MB/s 00:09:13.807 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.807 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:13.807 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:13.807 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:13.807 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:13.807 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ 
)) 00:09:13.807 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:13.807 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:09:13.807 /dev/nbd14 00:09:14.066 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:14.066 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:14.066 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:09:14.066 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:09:14.066 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:14.066 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:14.066 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:09:14.066 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:09:14.066 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:14.066 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:14.066 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:14.066 1+0 records in 00:09:14.066 1+0 records out 00:09:14.066 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000825771 s, 5.0 MB/s 00:09:14.066 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.066 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:09:14.066 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:14.066 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:14.066 03:53:56 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:09:14.066 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:14.066 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:14.066 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:14.066 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:14.066 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:14.066 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:14.067 { 00:09:14.067 "nbd_device": "/dev/nbd0", 00:09:14.067 "bdev_name": "Nvme0n1" 00:09:14.067 }, 00:09:14.067 { 00:09:14.067 "nbd_device": "/dev/nbd1", 00:09:14.067 "bdev_name": "Nvme1n1p1" 00:09:14.067 }, 00:09:14.067 { 00:09:14.067 "nbd_device": "/dev/nbd10", 00:09:14.067 "bdev_name": "Nvme1n1p2" 00:09:14.067 }, 00:09:14.067 { 00:09:14.067 "nbd_device": "/dev/nbd11", 00:09:14.067 "bdev_name": "Nvme2n1" 00:09:14.067 }, 00:09:14.067 { 00:09:14.067 "nbd_device": "/dev/nbd12", 00:09:14.067 "bdev_name": "Nvme2n2" 00:09:14.067 }, 00:09:14.067 { 00:09:14.067 "nbd_device": "/dev/nbd13", 00:09:14.067 "bdev_name": "Nvme2n3" 00:09:14.067 }, 00:09:14.067 { 
00:09:14.067 "nbd_device": "/dev/nbd14", 00:09:14.067 "bdev_name": "Nvme3n1" 00:09:14.067 } 00:09:14.067 ]' 00:09:14.067 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:14.067 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:14.067 { 00:09:14.067 "nbd_device": "/dev/nbd0", 00:09:14.067 "bdev_name": "Nvme0n1" 00:09:14.067 }, 00:09:14.067 { 00:09:14.067 "nbd_device": "/dev/nbd1", 00:09:14.067 "bdev_name": "Nvme1n1p1" 00:09:14.067 }, 00:09:14.067 { 00:09:14.067 "nbd_device": "/dev/nbd10", 00:09:14.067 "bdev_name": "Nvme1n1p2" 00:09:14.067 }, 00:09:14.067 { 00:09:14.067 "nbd_device": "/dev/nbd11", 00:09:14.067 "bdev_name": "Nvme2n1" 00:09:14.067 }, 00:09:14.067 { 00:09:14.067 "nbd_device": "/dev/nbd12", 00:09:14.067 "bdev_name": "Nvme2n2" 00:09:14.067 }, 00:09:14.067 { 00:09:14.067 "nbd_device": "/dev/nbd13", 00:09:14.067 "bdev_name": "Nvme2n3" 00:09:14.067 }, 00:09:14.067 { 00:09:14.067 "nbd_device": "/dev/nbd14", 00:09:14.067 "bdev_name": "Nvme3n1" 00:09:14.067 } 00:09:14.067 ]' 00:09:14.325 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:14.325 /dev/nbd1 00:09:14.325 /dev/nbd10 00:09:14.325 /dev/nbd11 00:09:14.325 /dev/nbd12 00:09:14.325 /dev/nbd13 00:09:14.325 /dev/nbd14' 00:09:14.325 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:14.325 /dev/nbd1 00:09:14.325 /dev/nbd10 00:09:14.325 /dev/nbd11 00:09:14.325 /dev/nbd12 00:09:14.325 /dev/nbd13 00:09:14.325 /dev/nbd14' 00:09:14.325 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:14.325 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:09:14.325 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:09:14.325 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:09:14.325 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:14.325 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:14.325 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:14.325 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:14.325 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:14.325 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:14.325 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:14.325 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:14.325 256+0 records in 00:09:14.325 256+0 records out 00:09:14.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0126039 s, 83.2 MB/s 00:09:14.325 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:14.325 03:53:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:14.325 256+0 records in 00:09:14.325 256+0 records out 00:09:14.325 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.146776 s, 7.1 MB/s 00:09:14.325 
03:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:14.325 03:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:14.583 256+0 records in 00:09:14.583 256+0 records out 00:09:14.583 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.161956 s, 6.5 MB/s 00:09:14.583 03:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:14.583 03:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:14.842 256+0 records in 00:09:14.843 256+0 records out 00:09:14.843 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153032 s, 6.9 MB/s 00:09:14.843 03:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:14.843 03:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:14.843 256+0 records in 00:09:14.843 256+0 records out 00:09:14.843 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.15709 s, 6.7 MB/s 00:09:14.843 03:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:14.843 03:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:15.101 256+0 records in 00:09:15.101 256+0 records out 00:09:15.101 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152185 s, 6.9 MB/s 00:09:15.101 03:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:15.101 03:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:15.101 256+0 records in 00:09:15.101 256+0 records out 00:09:15.101 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.152404 s, 6.9 MB/s 00:09:15.101 03:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:15.101 03:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:15.361 256+0 records in 00:09:15.361 256+0 records out 00:09:15.361 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.153889 s, 6.8 MB/s 00:09:15.361 03:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:09:15.361 03:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:15.361 03:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:15.361 03:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:15.361 03:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:15.361 03:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:15.361 03:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:15.361 03:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:15.361 03:53:57 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:15.361 03:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:15.361 03:53:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:15.361 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:15.361 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:15.361 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:15.361 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:15.361 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:15.361 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:15.361 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:15.361 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:15.361 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:15.361 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:15.361 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:15.361 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:15.361 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:15.361 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:15.361 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:15.361 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:15.361 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:15.361 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:15.621 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:15.621 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:15.621 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:15.621 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:15.621 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:15.621 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:15.621 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:15.621 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:15.621 
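
The data-integrity pass traced above is a plain dd/cmp round trip: one 1 MiB reference pattern is generated from /dev/urandom, written to each of the seven exported NBD devices with O_DIRECT, then compared back byte-for-byte. A condensed sketch of that flow, reconstructed from the xtrace (the pattern path and the nbd_list contents are taken from the trace; set -e stands in for the harness's own error handling):

    # Condensed reconstruction of the nbd_dd_data_verify write/verify pass above.
    set -e
    pattern=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)
    dd if=/dev/urandom of="$pattern" bs=4096 count=256            # 1 MiB reference data
    for dev in "${nbd_list[@]}"; do
        dd if="$pattern" of="$dev" bs=4096 count=256 oflag=direct # write pass, bypassing the page cache
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$pattern" "$dev"                            # read-back verify; exits non-zero on mismatch
    done
    rm -f "$pattern"

Since every /dev/nbdX here is backed by a different bdev (whole NVMe namespaces plus the two GPT partitions of Nvme1n1), a clean compare on all seven exercises the whole RPC-to-NBD data path rather than a single device.
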
03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:15.621 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:15.880 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:15.880 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:15.880 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:15.880 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:15.880 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:15.880 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:15.880 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:15.880 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:15.880 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:15.880 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:16.137 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:16.137 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:16.137 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:16.137 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:16.137 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:16.137 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:16.137 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:16.137 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:16.137 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:16.137 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:16.395 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:16.395 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:16.395 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:16.395 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:16.395 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:16.395 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:16.395 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:16.395 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:16.395 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:16.395 03:53:58 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:16.653 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:16.653 03:53:59 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:16.653 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:16.653 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:16.653 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:16.653 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:16.653 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:16.653 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:16.653 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:16.653 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:16.653 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:16.653 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:16.653 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:16.653 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:16.653 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:16.653 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:16.653 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:16.654 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:16.654 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:16.654 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:16.913 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:16.913 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:16.913 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:16.913 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:16.913 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:16.913 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:16.913 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:16.913 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:16.913 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:16.913 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:16.913 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:17.172 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:17.172 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:17.172 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:17.172 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:17.172 
03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:09:17.172 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:17.172 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:09:17.172 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:09:17.172 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:09:17.172 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:09:17.172 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:17.172 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:09:17.172 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:17.172 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:17.172 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:09:17.172 03:53:59 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:17.432 malloc_lvol_verify 00:09:17.432 03:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:17.691 bebf3be0-cac1-44f5-b095-b04f09daadbd 00:09:17.691 03:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:17.951 10b3a064-dcd5-4110-ba34-c36163dcba31 00:09:17.951 03:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:17.951 /dev/nbd0 00:09:18.209 03:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:09:18.209 03:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:09:18.209 03:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:09:18.209 03:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:09:18.209 03:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:09:18.209 mke2fs 1.47.0 (5-Feb-2023) 00:09:18.209 Discarding device blocks: 0/4096 done 00:09:18.209 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:18.209 00:09:18.209 Allocating group tables: 0/1 done 00:09:18.209 Writing inode tables: 0/1 done 00:09:18.209 Creating journal (1024 blocks): done 00:09:18.209 Writing superblocks and filesystem accounting information: 0/1 done 00:09:18.209 00:09:18.209 03:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:18.210 03:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:18.210 03:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:18.210 03:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:18.210 03:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:09:18.210 03:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:18.210 03:54:00 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:18.210 03:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:18.469 03:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:18.469 03:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:18.469 03:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:18.469 03:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:18.469 03:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:18.469 03:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:09:18.469 03:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:09:18.469 03:54:00 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62609 00:09:18.469 03:54:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62609 ']' 00:09:18.469 03:54:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62609 00:09:18.469 03:54:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:09:18.469 03:54:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:18.469 03:54:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62609 00:09:18.469 killing process with pid 62609 00:09:18.469 03:54:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:18.469 03:54:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:18.469 03:54:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62609' 00:09:18.469 03:54:00 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62609 00:09:18.469 03:54:01 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62609 00:09:19.847 ************************************ 00:09:19.847 END TEST bdev_nbd 00:09:19.847 ************************************ 00:09:19.847 03:54:02 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:09:19.847 00:09:19.847 real 0m12.401s 00:09:19.847 user 0m15.681s 00:09:19.847 sys 0m5.457s 00:09:19.847 03:54:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:19.847 03:54:02 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:09:19.847 03:54:02 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:09:19.847 03:54:02 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:09:19.847 03:54:02 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:09:19.847 skipping fio tests on NVMe due to multi-ns failures. 00:09:19.847 03:54:02 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
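
Everything in the bdev_nbd test above, from nbd_start_disk through the closing lvol check (a 16 MB malloc bdev wrapped in an lvstore, a 4 MiB lvol exported as /dev/nbd0, and mkfs.ext4 succeeding on it), leans on one polling helper. waitfornbd first greps /proc/partitions for the device name, up to 20 attempts, then issues a single direct-I/O read and checks that a non-empty block came back, which proves the SPDK side is actually servicing requests and not merely that the node exists; waitfornbd_exit is the inverse, polling until the name leaves /proc/partitions after nbd_stop_disk. An approximate reconstruction of the attach-side helper from the xtrace (the retry sleep and the failure return are assumptions, since the trace never needs a second iteration):

    # Approximate reconstruction of waitfornbd as traced above; sleep interval is assumed.
    waitfornbd() {
        local nbd_name=$1 i size
        local tmp=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        for (( i = 1; i <= 20; i++ )); do                  # wait for the kernel device node
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        for (( i = 1; i <= 20; i++ )); do                  # wait until reads actually succeed
            if dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s "$tmp")
                rm -f "$tmp"
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1
        done
        return 1
    }
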
00:09:19.847 03:54:02 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:19.847 03:54:02 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:19.847 03:54:02 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:09:19.847 03:54:02 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:19.847 03:54:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:19.847 ************************************ 00:09:19.847 START TEST bdev_verify 00:09:19.847 ************************************ 00:09:19.848 03:54:02 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:19.848 [2024-12-07 03:54:02.371937] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:09:19.848 [2024-12-07 03:54:02.372084] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63031 ] 00:09:19.848 [2024-12-07 03:54:02.555152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:20.107 [2024-12-07 03:54:02.670691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.107 [2024-12-07 03:54:02.670715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.674 Running I/O for 5 seconds... 
00:09:22.987 17472.00 IOPS, 68.25 MiB/s [2024-12-07T03:54:07.101Z] 17856.00 IOPS, 69.75 MiB/s [2024-12-07T03:54:07.669Z] 17472.00 IOPS, 68.25 MiB/s [2024-12-07T03:54:08.606Z] 17152.00 IOPS, 67.00 MiB/s [2024-12-07T03:54:08.606Z] 17292.80 IOPS, 67.55 MiB/s 00:09:25.870 Latency(us) 00:09:25.870 [2024-12-07T03:54:08.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.870 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:25.870 Verification LBA range: start 0x0 length 0xbd0bd 00:09:25.870 Nvme0n1 : 5.09 1042.63 4.07 0.00 0.00 122001.96 10685.79 112016.55 00:09:25.870 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:25.870 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:09:25.870 Nvme0n1 : 5.04 1370.64 5.35 0.00 0.00 93047.47 22003.25 97277.53 00:09:25.870 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:25.870 Verification LBA range: start 0x0 length 0x4ff80 00:09:25.870 Nvme1n1p1 : 5.10 1042.38 4.07 0.00 0.00 121814.20 10369.95 101488.68 00:09:25.870 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:25.870 Verification LBA range: start 0x4ff80 length 0x4ff80 00:09:25.870 Nvme1n1p1 : 5.07 1376.49 5.38 0.00 0.00 92533.61 9106.61 90960.81 00:09:25.870 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:25.870 Verification LBA range: start 0x0 length 0x4ff7f 00:09:25.870 Nvme1n1p2 : 5.11 1052.22 4.11 0.00 0.00 120956.93 10896.35 101067.57 00:09:25.870 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:25.870 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:09:25.870 Nvme1n1p2 : 5.07 1376.02 5.38 0.00 0.00 92333.42 9053.97 85486.32 00:09:25.870 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:25.870 Verification LBA range: start 0x0 length 0x80000 00:09:25.870 Nvme2n1 : 5.11 1051.98 4.11 0.00 0.00 120710.51 10896.35 101067.57 00:09:25.870 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:25.870 Verification LBA range: start 0x80000 length 0x80000 00:09:25.870 Nvme2n1 : 5.07 1375.70 5.37 0.00 0.00 92223.87 9422.44 83380.74 00:09:25.870 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:25.870 Verification LBA range: start 0x0 length 0x80000 00:09:25.870 Nvme2n2 : 5.11 1051.75 4.11 0.00 0.00 120439.36 10791.07 103173.14 00:09:25.870 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:25.870 Verification LBA range: start 0x80000 length 0x80000 00:09:25.870 Nvme2n2 : 5.08 1385.45 5.41 0.00 0.00 91663.13 8159.10 80432.94 00:09:25.870 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:25.870 Verification LBA range: start 0x0 length 0x80000 00:09:25.870 Nvme2n3 : 5.11 1051.50 4.11 0.00 0.00 120261.19 10791.07 106120.94 00:09:25.870 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:25.870 Verification LBA range: start 0x80000 length 0x80000 00:09:25.870 Nvme2n3 : 5.08 1385.12 5.41 0.00 0.00 91574.18 8053.82 86749.66 00:09:25.870 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:25.870 Verification LBA range: start 0x0 length 0x20000 00:09:25.870 Nvme3n1 : 5.11 1051.28 4.11 0.00 0.00 120158.46 10633.15 110332.09 00:09:25.870 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:25.870 Verification LBA range: start 0x20000 length 0x20000 
00:09:25.870 Nvme3n1 : 5.08 1384.79 5.41 0.00 0.00 91477.09 8369.66 97698.65 00:09:25.870 [2024-12-07T03:54:08.606Z] =================================================================================================================== 00:09:25.870 [2024-12-07T03:54:08.606Z] Total : 16997.94 66.40 0.00 0.00 104603.69 8053.82 112016.55 00:09:27.249 00:09:27.249 real 0m7.570s 00:09:27.249 user 0m13.910s 00:09:27.249 sys 0m0.366s 00:09:27.249 03:54:09 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:27.250 ************************************ 00:09:27.250 03:54:09 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:09:27.250 END TEST bdev_verify 00:09:27.250 ************************************ 00:09:27.250 03:54:09 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:27.250 03:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:09:27.250 03:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:27.250 03:54:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:27.250 ************************************ 00:09:27.250 START TEST bdev_verify_big_io 00:09:27.250 ************************************ 00:09:27.250 03:54:09 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:27.509 [2024-12-07 03:54:10.014136] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:09:27.509 [2024-12-07 03:54:10.014257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63136 ] 00:09:27.509 [2024-12-07 03:54:10.197448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:27.769 [2024-12-07 03:54:10.304099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.769 [2024-12-07 03:54:10.304123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.707 Running I/O for 5 seconds... 
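
Reading the bdev_verify table above: bdevperf ran with -q 128 (outstanding I/Os per job), -o 4096 (I/O size in bytes), -w verify (read/write with data checking), -t 5 (seconds), and -m 0x3 with -C, so both reactor cores drive every bdev, which is why each bdev gets one job line per core mask. The latency columns are in microseconds. Throughput is just IOPS × I/O size: the closing 17292.80 IOPS sample is 17292.80 × 4096 / 2^20 ≈ 67.55 MiB/s, and the Total row's 16997.94 IOPS gives the reported 66.40 MiB/s. The pass criterion is the zeros in the Fail/s and TO/s columns; any miscompare or timeout fails the run.
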
00:09:33.898 2211.00 IOPS, 138.19 MiB/s [2024-12-07T03:54:17.574Z] 3944.00 IOPS, 246.50 MiB/s [2024-12-07T03:54:17.574Z] 4448.67 IOPS, 278.04 MiB/s 00:09:34.838 Latency(us) 00:09:34.838 [2024-12-07T03:54:17.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:34.838 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:34.838 Verification LBA range: start 0x0 length 0xbd0b 00:09:34.838 Nvme0n1 : 5.63 91.96 5.75 0.00 0.00 1342576.36 17897.38 1529489.17 00:09:34.838 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:34.838 Verification LBA range: start 0xbd0b length 0xbd0b 00:09:34.838 Nvme0n1 : 5.46 203.24 12.70 0.00 0.00 609368.00 19897.68 700735.13 00:09:34.838 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:34.838 Verification LBA range: start 0x0 length 0x4ff8 00:09:34.838 Nvme1n1p1 : 5.63 94.24 5.89 0.00 0.00 1240706.09 52218.24 1239762.15 00:09:34.838 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:34.838 Verification LBA range: start 0x4ff8 length 0x4ff8 00:09:34.838 Nvme1n1p1 : 5.52 208.84 13.05 0.00 0.00 586243.47 52007.69 596298.64 00:09:34.838 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:34.838 Verification LBA range: start 0x0 length 0x4ff7 00:09:34.838 Nvme1n1p2 : 5.76 104.60 6.54 0.00 0.00 1076210.79 38742.57 1111743.23 00:09:34.838 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:34.838 Verification LBA range: start 0x4ff7 length 0x4ff7 00:09:34.838 Nvme1n1p2 : 5.47 210.33 13.15 0.00 0.00 578104.65 89276.35 532289.18 00:09:34.838 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:34.838 Verification LBA range: start 0x0 length 0x8000 00:09:34.838 Nvme2n1 : 5.85 112.59 7.04 0.00 0.00 967474.81 37058.11 1994399.97 00:09:34.838 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:34.838 Verification LBA range: start 0x8000 length 0x8000 00:09:34.838 Nvme2n1 : 5.47 210.60 13.16 0.00 0.00 568414.95 89276.35 582822.97 00:09:34.838 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:34.838 Verification LBA range: start 0x0 length 0x8000 00:09:34.838 Nvme2n2 : 6.01 145.85 9.12 0.00 0.00 721896.27 20529.35 2304340.51 00:09:34.838 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:34.838 Verification LBA range: start 0x8000 length 0x8000 00:09:34.838 Nvme2n2 : 5.54 219.41 13.71 0.00 0.00 540165.35 17160.43 596298.64 00:09:34.838 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:34.838 Verification LBA range: start 0x0 length 0x8000 00:09:34.838 Nvme2n3 : 6.20 197.34 12.33 0.00 0.00 519309.47 7369.51 1664245.92 00:09:34.838 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:34.838 Verification LBA range: start 0x8000 length 0x8000 00:09:34.838 Nvme2n3 : 5.56 225.09 14.07 0.00 0.00 519298.53 11896.49 606405.40 00:09:34.838 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:34.838 Verification LBA range: start 0x0 length 0x2000 00:09:34.838 Nvme3n1 : 6.32 260.75 16.30 0.00 0.00 380728.57 654.70 2129156.73 00:09:34.838 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:34.838 Verification LBA range: start 0x2000 length 0x2000 00:09:34.838 Nvme3n1 : 5.57 229.97 14.37 0.00 0.00 500778.68 4816.50 616512.15 00:09:34.838 
[2024-12-07T03:54:17.574Z] =================================================================================================================== 00:09:34.838 [2024-12-07T03:54:17.574Z] Total : 2514.80 157.18 0.00 0.00 637070.81 654.70 2304340.51 00:09:36.827 00:09:36.827 real 0m9.524s 00:09:36.827 user 0m17.848s 00:09:36.827 sys 0m0.357s 00:09:36.827 03:54:19 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.827 03:54:19 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:09:36.827 ************************************ 00:09:36.827 END TEST bdev_verify_big_io 00:09:36.827 ************************************ 00:09:36.827 03:54:19 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:36.827 03:54:19 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:36.827 03:54:19 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.827 03:54:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:36.827 ************************************ 00:09:36.827 START TEST bdev_write_zeroes 00:09:36.827 ************************************ 00:09:36.827 03:54:19 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:37.144 [2024-12-07 03:54:19.630322] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:09:37.144 [2024-12-07 03:54:19.630465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63256 ] 00:09:37.144 [2024-12-07 03:54:19.814050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.425 [2024-12-07 03:54:19.924842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.990 Running I/O for 1 seconds... 
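
bdev_verify_big_io above is the same verify workload at -o 65536, so each I/O moves 64 KiB instead of 4 KiB: per-job IOPS drop sharply while bytes per second climb, and the Total row is internally consistent at 2514.80 × 65536 / 2^20 ≈ 157.18 MiB/s. As in the 4 KiB run, zero Fail/s and TO/s across all fourteen job lines is what makes it a pass.
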
00:09:38.923 71680.00 IOPS, 280.00 MiB/s 00:09:38.923 Latency(us) 00:09:38.923 [2024-12-07T03:54:21.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:38.923 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:38.923 Nvme0n1 : 1.02 10225.57 39.94 0.00 0.00 12492.75 10369.95 26740.79 00:09:38.923 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:38.923 Nvme1n1p1 : 1.02 10213.90 39.90 0.00 0.00 12490.32 10106.76 26951.35 00:09:38.923 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:38.923 Nvme1n1p2 : 1.02 10204.73 39.86 0.00 0.00 12473.29 10159.40 26424.96 00:09:38.923 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:38.923 Nvme2n1 : 1.02 10196.38 39.83 0.00 0.00 12421.48 10422.59 22424.37 00:09:38.923 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:38.923 Nvme2n2 : 1.02 10187.59 39.80 0.00 0.00 12413.98 10369.95 22424.37 00:09:38.923 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:38.923 Nvme2n3 : 1.02 10178.00 39.76 0.00 0.00 12388.03 9685.64 20213.51 00:09:38.923 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:38.923 Nvme3n1 : 1.03 10168.62 39.72 0.00 0.00 12356.67 8474.94 21687.42 00:09:38.923 [2024-12-07T03:54:21.659Z] =================================================================================================================== 00:09:38.923 [2024-12-07T03:54:21.659Z] Total : 71374.79 278.81 0.00 0.00 12433.79 8474.94 26951.35 00:09:40.303 00:09:40.303 real 0m3.223s 00:09:40.303 user 0m2.821s 00:09:40.303 sys 0m0.289s 00:09:40.303 03:54:22 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.303 03:54:22 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:09:40.303 ************************************ 00:09:40.303 END TEST bdev_write_zeroes 00:09:40.303 ************************************ 00:09:40.303 03:54:22 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:40.303 03:54:22 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:40.303 03:54:22 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.303 03:54:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:40.303 ************************************ 00:09:40.303 START TEST bdev_json_nonenclosed 00:09:40.303 ************************************ 00:09:40.303 03:54:22 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:40.303 [2024-12-07 03:54:22.923858] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
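
The bdev_write_zeroes run that just finished is single-core (-c 0x1 in the EAL arguments, one reactor) and one second long, issuing 4 KiB zero-write operations at queue depth 128. The seven job lines land within about half a percent of one another, roughly 10.2k IOPS each for 71374.79 IOPS total, i.e. 278.81 MiB/s at 4096 bytes per I/O, which is the expected shape when one core fans an identical operation across every namespace. bdev_json_nonenclosed, starting below, switches to negative testing: bdevperf is handed a deliberately malformed --json configuration and must refuse to start.
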
00:09:40.303 [2024-12-07 03:54:22.923987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63309 ] 00:09:40.563 [2024-12-07 03:54:23.104711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.563 [2024-12-07 03:54:23.213424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.563 [2024-12-07 03:54:23.213511] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:40.563 [2024-12-07 03:54:23.213533] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:40.563 [2024-12-07 03:54:23.213544] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:40.823 00:09:40.823 real 0m0.621s 00:09:40.823 user 0m0.374s 00:09:40.823 sys 0m0.143s 00:09:40.823 03:54:23 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:40.823 03:54:23 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:40.823 ************************************ 00:09:40.823 END TEST bdev_json_nonenclosed 00:09:40.823 ************************************ 00:09:40.823 03:54:23 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:40.823 03:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:09:40.823 03:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.823 03:54:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:40.823 ************************************ 00:09:40.823 START TEST bdev_json_nonarray 00:09:40.823 ************************************ 00:09:40.823 03:54:23 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:41.083 [2024-12-07 03:54:23.620410] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:09:41.083 [2024-12-07 03:54:23.620526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63340 ] 00:09:41.083 [2024-12-07 03:54:23.798483] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.343 [2024-12-07 03:54:23.905044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.343 [2024-12-07 03:54:23.905156] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:09:41.343 [2024-12-07 03:54:23.905180] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:41.343 [2024-12-07 03:54:23.905193] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:41.603 00:09:41.603 real 0m0.623s 00:09:41.603 user 0m0.379s 00:09:41.603 sys 0m0.139s 00:09:41.603 03:54:24 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.603 03:54:24 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:41.603 ************************************ 00:09:41.603 END TEST bdev_json_nonarray 00:09:41.603 ************************************ 00:09:41.603 03:54:24 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:09:41.603 03:54:24 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:09:41.603 03:54:24 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:41.603 03:54:24 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:41.603 03:54:24 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.603 03:54:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:41.603 ************************************ 00:09:41.603 START TEST bdev_gpt_uuid 00:09:41.603 ************************************ 00:09:41.603 03:54:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:09:41.603 03:54:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:09:41.603 03:54:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:09:41.603 03:54:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63365 00:09:41.603 03:54:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:41.603 03:54:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:41.603 03:54:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63365 00:09:41.603 03:54:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63365 ']' 00:09:41.603 03:54:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.603 03:54:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.603 03:54:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.603 03:54:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.603 03:54:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:41.862 [2024-12-07 03:54:24.347161] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
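
Both JSON negative tests pass by failing cleanly: json_config_prepare_ctx rejects the malformed --json file (top level not enclosed in {} in one case, a 'subsystems' key that is not an array in the other), no RPC server is ever started, and spdk_app_stop exits non-zero, which run_test counts as the expected outcome. The fixture contents are not shown in this log; purely as an illustration, nonenclosed.json could be a bare "subsystems": [] with no surrounding braces, and nonarray.json something like { "subsystems": {} }.

The bdev_gpt_uuid test now starting takes the opposite approach: a long-running spdk_tgt is queried over RPC. Once the bdev config is loaded, each GPT partition of Nvme1n1 must be resolvable by its unique partition GUID, and jq assertions confirm that GUID round-trips through both the alias list and the driver_specific.gpt fields. Condensed from the rpc_cmd/jq trace that follows (rpc_cmd in the harness wraps scripts/rpc.py against the default /var/tmp/spdk.sock socket; the path is shown relative to the repo):

    # Condensed form of the GPT UUID assertions traced below.
    uuid=6f89f330-603b-4116-ac73-2ca8eae53030            # Nvme1n1p1 / SPDK_TEST_first
    bdev=$(scripts/rpc.py bdev_get_bdevs -b "$uuid")     # look the bdev up by its UUID alias
    [[ $(jq -r 'length' <<< "$bdev") == 1 ]]
    [[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == "$uuid" ]]
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev") == "$uuid" ]]

The same three checks repeat below for the second partition (abf1734f-66e5-4c0f-aa29-4021d4d307df), and killing the spdk_tgt pid ends the test.
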
00:09:41.862 [2024-12-07 03:54:24.347283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63365 ] 00:09:41.862 [2024-12-07 03:54:24.528442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.122 [2024-12-07 03:54:24.638093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.060 03:54:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.060 03:54:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:09:43.060 03:54:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:43.060 03:54:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.060 03:54:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:43.319 Some configs were skipped because the RPC state that can call them passed over. 00:09:43.319 03:54:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.319 03:54:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:09:43.319 03:54:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.319 03:54:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:43.319 03:54:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.319 03:54:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:43.319 03:54:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.319 03:54:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:43.319 03:54:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.319 03:54:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:09:43.319 { 00:09:43.319 "name": "Nvme1n1p1", 00:09:43.319 "aliases": [ 00:09:43.319 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:43.319 ], 00:09:43.319 "product_name": "GPT Disk", 00:09:43.319 "block_size": 4096, 00:09:43.319 "num_blocks": 655104, 00:09:43.319 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:43.319 "assigned_rate_limits": { 00:09:43.319 "rw_ios_per_sec": 0, 00:09:43.319 "rw_mbytes_per_sec": 0, 00:09:43.319 "r_mbytes_per_sec": 0, 00:09:43.319 "w_mbytes_per_sec": 0 00:09:43.319 }, 00:09:43.319 "claimed": false, 00:09:43.319 "zoned": false, 00:09:43.319 "supported_io_types": { 00:09:43.319 "read": true, 00:09:43.319 "write": true, 00:09:43.319 "unmap": true, 00:09:43.319 "flush": true, 00:09:43.319 "reset": true, 00:09:43.319 "nvme_admin": false, 00:09:43.319 "nvme_io": false, 00:09:43.319 "nvme_io_md": false, 00:09:43.319 "write_zeroes": true, 00:09:43.319 "zcopy": false, 00:09:43.319 "get_zone_info": false, 00:09:43.319 "zone_management": false, 00:09:43.319 "zone_append": false, 00:09:43.319 "compare": true, 00:09:43.319 "compare_and_write": false, 00:09:43.319 "abort": true, 00:09:43.319 "seek_hole": false, 00:09:43.319 "seek_data": false, 00:09:43.319 "copy": true, 00:09:43.319 "nvme_iov_md": false 00:09:43.319 }, 00:09:43.319 "driver_specific": { 
00:09:43.319 "gpt": { 00:09:43.319 "base_bdev": "Nvme1n1", 00:09:43.319 "offset_blocks": 256, 00:09:43.319 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:43.319 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:43.319 "partition_name": "SPDK_TEST_first" 00:09:43.319 } 00:09:43.319 } 00:09:43.319 } 00:09:43.319 ]' 00:09:43.319 03:54:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:09:43.319 03:54:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:09:43.319 03:54:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:09:43.319 03:54:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:43.319 03:54:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:43.319 03:54:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:43.319 03:54:25 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:43.319 03:54:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.319 03:54:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:43.319 03:54:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.319 03:54:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:09:43.319 { 00:09:43.319 "name": "Nvme1n1p2", 00:09:43.319 "aliases": [ 00:09:43.319 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:43.319 ], 00:09:43.319 "product_name": "GPT Disk", 00:09:43.319 "block_size": 4096, 00:09:43.319 "num_blocks": 655103, 00:09:43.319 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:43.319 "assigned_rate_limits": { 00:09:43.319 "rw_ios_per_sec": 0, 00:09:43.319 "rw_mbytes_per_sec": 0, 00:09:43.319 "r_mbytes_per_sec": 0, 00:09:43.319 "w_mbytes_per_sec": 0 00:09:43.319 }, 00:09:43.319 "claimed": false, 00:09:43.319 "zoned": false, 00:09:43.319 "supported_io_types": { 00:09:43.319 "read": true, 00:09:43.319 "write": true, 00:09:43.319 "unmap": true, 00:09:43.319 "flush": true, 00:09:43.319 "reset": true, 00:09:43.319 "nvme_admin": false, 00:09:43.319 "nvme_io": false, 00:09:43.319 "nvme_io_md": false, 00:09:43.319 "write_zeroes": true, 00:09:43.319 "zcopy": false, 00:09:43.319 "get_zone_info": false, 00:09:43.319 "zone_management": false, 00:09:43.319 "zone_append": false, 00:09:43.319 "compare": true, 00:09:43.319 "compare_and_write": false, 00:09:43.319 "abort": true, 00:09:43.319 "seek_hole": false, 00:09:43.319 "seek_data": false, 00:09:43.319 "copy": true, 00:09:43.319 "nvme_iov_md": false 00:09:43.319 }, 00:09:43.319 "driver_specific": { 00:09:43.319 "gpt": { 00:09:43.319 "base_bdev": "Nvme1n1", 00:09:43.319 "offset_blocks": 655360, 00:09:43.319 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:43.319 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:43.319 "partition_name": "SPDK_TEST_second" 00:09:43.319 } 00:09:43.319 } 00:09:43.319 } 00:09:43.319 ]' 00:09:43.319 03:54:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:09:43.578 03:54:26 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:09:43.578 03:54:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:09:43.578 03:54:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:43.578 03:54:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:43.578 03:54:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:43.578 03:54:26 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 63365 00:09:43.578 03:54:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63365 ']' 00:09:43.578 03:54:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63365 00:09:43.578 03:54:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:09:43.578 03:54:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:43.578 03:54:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63365 00:09:43.578 killing process with pid 63365 00:09:43.578 03:54:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:43.578 03:54:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:43.578 03:54:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63365' 00:09:43.578 03:54:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63365 00:09:43.578 03:54:26 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63365 00:09:46.111 ************************************ 00:09:46.111 END TEST bdev_gpt_uuid 00:09:46.111 ************************************ 00:09:46.111 00:09:46.111 real 0m4.190s 00:09:46.111 user 0m4.270s 00:09:46.111 sys 0m0.535s 00:09:46.111 03:54:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.111 03:54:28 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:46.111 03:54:28 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:09:46.111 03:54:28 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:09:46.111 03:54:28 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:09:46.111 03:54:28 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:46.111 03:54:28 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:46.111 03:54:28 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:09:46.111 03:54:28 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:09:46.111 03:54:28 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:09:46.111 03:54:28 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:46.370 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:46.938 Waiting for block devices as requested 00:09:46.939 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:46.939 0000:00:10.0 (1b36 0010): 
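
The bdev_gpt_uuid checks above assert that the GPT unique partition GUID is exposed twice by SPDK: once as the bdev's alias and once as driver_specific.gpt.unique_partition_guid. A minimal sketch of the same verification outside the test harness, assuming a running SPDK target, the in-tree scripts/rpc.py helper, and jq (the GUID value is taken from the dump above):

    # Fetch the partition bdev by its unique partition GUID (also its alias).
    guid="abf1734f-66e5-4c0f-aa29-4021d4d307df"
    bdev_json="$(./scripts/rpc.py bdev_get_bdevs -b "$guid")"
    # Both fields should echo the GUID back.
    bdev_alias="$(jq -r '.[0].aliases[0]' <<< "$bdev_json")"
    part_guid="$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev_json")"
    [[ $bdev_alias == "$guid" && $part_guid == "$guid" ]] \
        && echo "GPT UUID check passed" \
        || { echo "GPT UUID mismatch" >&2; exit 1; }
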
uio_pci_generic -> nvme 00:09:46.939 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:47.198 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:52.472 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:52.472 03:54:34 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:09:52.472 03:54:34 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:09:52.472 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:52.472 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:09:52.472 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:52.472 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:09:52.472 03:54:35 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:09:52.472 00:09:52.472 real 1m5.292s 00:09:52.472 user 1m20.653s 00:09:52.472 sys 0m12.698s 00:09:52.472 03:54:35 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.472 03:54:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:52.472 ************************************ 00:09:52.472 END TEST blockdev_nvme_gpt 00:09:52.472 ************************************ 00:09:52.731 03:54:35 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:52.731 03:54:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:52.731 03:54:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.731 03:54:35 -- common/autotest_common.sh@10 -- # set +x 00:09:52.731 ************************************ 00:09:52.731 START TEST nvme 00:09:52.731 ************************************ 00:09:52.731 03:54:35 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:52.731 * Looking for test storage... 00:09:52.731 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:52.731 03:54:35 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:52.731 03:54:35 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:09:52.731 03:54:35 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:52.990 03:54:35 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:52.990 03:54:35 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:52.990 03:54:35 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:52.990 03:54:35 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:52.990 03:54:35 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:09:52.990 03:54:35 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:09:52.990 03:54:35 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:09:52.990 03:54:35 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:09:52.990 03:54:35 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:09:52.990 03:54:35 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:09:52.990 03:54:35 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:09:52.990 03:54:35 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:52.990 03:54:35 nvme -- scripts/common.sh@344 -- # case "$op" in 00:09:52.990 03:54:35 nvme -- scripts/common.sh@345 -- # : 1 00:09:52.990 03:54:35 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:52.990 03:54:35 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:52.990 03:54:35 nvme -- scripts/common.sh@365 -- # decimal 1 00:09:52.990 03:54:35 nvme -- scripts/common.sh@353 -- # local d=1 00:09:52.990 03:54:35 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:52.990 03:54:35 nvme -- scripts/common.sh@355 -- # echo 1 00:09:52.990 03:54:35 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:09:52.991 03:54:35 nvme -- scripts/common.sh@366 -- # decimal 2 00:09:52.991 03:54:35 nvme -- scripts/common.sh@353 -- # local d=2 00:09:52.991 03:54:35 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:52.991 03:54:35 nvme -- scripts/common.sh@355 -- # echo 2 00:09:52.991 03:54:35 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:09:52.991 03:54:35 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:52.991 03:54:35 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:52.991 03:54:35 nvme -- scripts/common.sh@368 -- # return 0 00:09:52.991 03:54:35 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:52.991 03:54:35 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:52.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.991 --rc genhtml_branch_coverage=1 00:09:52.991 --rc genhtml_function_coverage=1 00:09:52.991 --rc genhtml_legend=1 00:09:52.991 --rc geninfo_all_blocks=1 00:09:52.991 --rc geninfo_unexecuted_blocks=1 00:09:52.991 00:09:52.991 ' 00:09:52.991 03:54:35 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:52.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.991 --rc genhtml_branch_coverage=1 00:09:52.991 --rc genhtml_function_coverage=1 00:09:52.991 --rc genhtml_legend=1 00:09:52.991 --rc geninfo_all_blocks=1 00:09:52.991 --rc geninfo_unexecuted_blocks=1 00:09:52.991 00:09:52.991 ' 00:09:52.991 03:54:35 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:52.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.991 --rc genhtml_branch_coverage=1 00:09:52.991 --rc genhtml_function_coverage=1 00:09:52.991 --rc genhtml_legend=1 00:09:52.991 --rc geninfo_all_blocks=1 00:09:52.991 --rc geninfo_unexecuted_blocks=1 00:09:52.991 00:09:52.991 ' 00:09:52.991 03:54:35 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:52.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:52.991 --rc genhtml_branch_coverage=1 00:09:52.991 --rc genhtml_function_coverage=1 00:09:52.991 --rc genhtml_legend=1 00:09:52.991 --rc geninfo_all_blocks=1 00:09:52.991 --rc geninfo_unexecuted_blocks=1 00:09:52.991 00:09:52.991 ' 00:09:52.991 03:54:35 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:53.560 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:54.499 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:54.499 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:54.499 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:54.499 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:54.759 03:54:37 nvme -- nvme/nvme.sh@79 -- # uname 00:09:54.759 03:54:37 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:54.759 03:54:37 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:54.759 03:54:37 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:54.759 03:54:37 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:54.759 03:54:37 nvme -- 
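
The lcov probe above walks through the version comparison in scripts/common.sh: each version string is split on '.', '-' and ':' and the numeric fields are compared left to right. A standalone sketch of the same idea, simplified to the strictly-less-than case (the in-tree cmp_versions also pads unequal field counts and supports the full set of operators):

    # Succeeds when version $1 < version $2, comparing numeric fields.
    version_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # earlier field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "1.15 < 2"    # matches the 'lt 1.15 2' probe above
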
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:09:54.759 03:54:37 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:09:54.759 03:54:37 nvme -- common/autotest_common.sh@1075 -- # stubpid=64028 00:09:54.759 Waiting for stub to ready for secondary processes... 00:09:54.759 03:54:37 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:54.759 03:54:37 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:09:54.759 03:54:37 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:54.759 03:54:37 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64028 ]] 00:09:54.759 03:54:37 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:09:54.759 [2024-12-07 03:54:37.310116] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:09:54.759 [2024-12-07 03:54:37.310242] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:09:55.698 03:54:38 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:55.698 03:54:38 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64028 ]] 00:09:55.698 03:54:38 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:09:55.698 [2024-12-07 03:54:38.352688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:55.959 [2024-12-07 03:54:38.479302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:55.959 [2024-12-07 03:54:38.479451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.959 [2024-12-07 03:54:38.479491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:55.959 [2024-12-07 03:54:38.497638] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:09:55.959 [2024-12-07 03:54:38.497691] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:55.959 [2024-12-07 03:54:38.514143] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:55.959 [2024-12-07 03:54:38.514254] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:55.959 [2024-12-07 03:54:38.517476] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:55.959 [2024-12-07 03:54:38.518122] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:55.959 [2024-12-07 03:54:38.518305] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:55.959 [2024-12-07 03:54:38.526874] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:55.959 [2024-12-07 03:54:38.527357] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:55.959 [2024-12-07 03:54:38.527532] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:55.959 [2024-12-07 03:54:38.533636] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:55.959 [2024-12-07 03:54:38.534115] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:55.959 [2024-12-07 03:54:38.534250] nvme_cuse.c: 
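
The stub startup above follows a simple readiness protocol: launch test/app/stub as the primary DPDK process, then poll until it creates /var/run/spdk_stub0, while confirming via /proc that the stub is still alive. A condensed sketch of that wait loop (paths as in the log; the 30-iteration cap is an added assumption, the harness sleeps 1s per pass just like this):

    ./test/app/stub/stub -s 4096 -i 0 -m 0xE &
    stubpid=$!
    echo "Waiting for stub to ready for secondary processes..."
    for (( i = 0; i < 30; i++ )); do
        [[ -e /var/run/spdk_stub0 ]] && break        # stub signals readiness
        [[ -e /proc/$stubpid ]] || exit 1            # stub died early
        sleep 1
    done
    [[ -e /var/run/spdk_stub0 ]] || { echo "stub not ready" >&2; exit 1; }
    echo done.
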
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:55.959 [2024-12-07 03:54:38.534338] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:55.959 [2024-12-07 03:54:38.534421] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:56.898 03:54:39 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:56.898 done. 00:09:56.898 03:54:39 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:09:56.898 03:54:39 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:56.898 03:54:39 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:09:56.898 03:54:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.898 03:54:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:56.898 ************************************ 00:09:56.898 START TEST nvme_reset 00:09:56.898 ************************************ 00:09:56.898 03:54:39 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:56.898 Initializing NVMe Controllers 00:09:56.898 Skipping QEMU NVMe SSD at 0000:00:10.0 00:09:56.898 Skipping QEMU NVMe SSD at 0000:00:11.0 00:09:56.898 Skipping QEMU NVMe SSD at 0000:00:13.0 00:09:56.898 Skipping QEMU NVMe SSD at 0000:00:12.0 00:09:56.898 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:56.898 00:09:56.898 real 0m0.353s 00:09:56.898 user 0m0.130s 00:09:56.898 sys 0m0.175s 00:09:56.898 03:54:39 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.898 ************************************ 00:09:56.898 03:54:39 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:09:57.158 END TEST nvme_reset 00:09:57.158 ************************************ 00:09:57.158 03:54:39 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:57.158 03:54:39 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:57.158 03:54:39 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.158 03:54:39 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:57.158 ************************************ 00:09:57.158 START TEST nvme_identify 00:09:57.158 ************************************ 00:09:57.158 03:54:39 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:09:57.158 03:54:39 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:09:57.158 03:54:39 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:57.158 03:54:39 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:57.158 03:54:39 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:57.158 03:54:39 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:57.158 03:54:39 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:09:57.158 03:54:39 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:57.158 03:54:39 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:57.158 03:54:39 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:57.158 03:54:39 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:57.158 03:54:39 nvme.nvme_identify -- 
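
Before identifying anything, the trace above shows get_nvme_bdfs collecting the controllers' PCI addresses: scripts/gen_nvme.sh renders an SPDK JSON config and jq extracts each attach parameter's traddr. The same pipeline as a standalone sketch, including the empty-set guard from the trace (repo path as in the log):

    # Enumerate NVMe controller BDFs from the generated SPDK config.
    bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo "No NVMe controllers found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"    # e.g. 0000:00:10.0 ... 0000:00:13.0
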
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:57.158 03:54:39 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:57.420 [2024-12-07 03:54:40.082419] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64062 terminated unexpected 00:09:57.420 ===================================================== 00:09:57.420 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:57.420 ===================================================== 00:09:57.420 Controller Capabilities/Features 00:09:57.420 ================================ 00:09:57.421 Vendor ID: 1b36 00:09:57.421 Subsystem Vendor ID: 1af4 00:09:57.421 Serial Number: 12340 00:09:57.421 Model Number: QEMU NVMe Ctrl 00:09:57.421 Firmware Version: 8.0.0 00:09:57.421 Recommended Arb Burst: 6 00:09:57.421 IEEE OUI Identifier: 00 54 52 00:09:57.421 Multi-path I/O 00:09:57.421 May have multiple subsystem ports: No 00:09:57.421 May have multiple controllers: No 00:09:57.421 Associated with SR-IOV VF: No 00:09:57.421 Max Data Transfer Size: 524288 00:09:57.421 Max Number of Namespaces: 256 00:09:57.421 Max Number of I/O Queues: 64 00:09:57.421 NVMe Specification Version (VS): 1.4 00:09:57.421 NVMe Specification Version (Identify): 1.4 00:09:57.421 Maximum Queue Entries: 2048 00:09:57.421 Contiguous Queues Required: Yes 00:09:57.421 Arbitration Mechanisms Supported 00:09:57.421 Weighted Round Robin: Not Supported 00:09:57.421 Vendor Specific: Not Supported 00:09:57.421 Reset Timeout: 7500 ms 00:09:57.421 Doorbell Stride: 4 bytes 00:09:57.421 NVM Subsystem Reset: Not Supported 00:09:57.421 Command Sets Supported 00:09:57.421 NVM Command Set: Supported 00:09:57.421 Boot Partition: Not Supported 00:09:57.421 Memory Page Size Minimum: 4096 bytes 00:09:57.421 Memory Page Size Maximum: 65536 bytes 00:09:57.421 Persistent Memory Region: Not Supported 00:09:57.421 Optional Asynchronous Events Supported 00:09:57.421 Namespace Attribute Notices: Supported 00:09:57.421 Firmware Activation Notices: Not Supported 00:09:57.421 ANA Change Notices: Not Supported 00:09:57.421 PLE Aggregate Log Change Notices: Not Supported 00:09:57.421 LBA Status Info Alert Notices: Not Supported 00:09:57.421 EGE Aggregate Log Change Notices: Not Supported 00:09:57.421 Normal NVM Subsystem Shutdown event: Not Supported 00:09:57.421 Zone Descriptor Change Notices: Not Supported 00:09:57.421 Discovery Log Change Notices: Not Supported 00:09:57.421 Controller Attributes 00:09:57.421 128-bit Host Identifier: Not Supported 00:09:57.421 Non-Operational Permissive Mode: Not Supported 00:09:57.421 NVM Sets: Not Supported 00:09:57.421 Read Recovery Levels: Not Supported 00:09:57.421 Endurance Groups: Not Supported 00:09:57.421 Predictable Latency Mode: Not Supported 00:09:57.421 Traffic Based Keep ALive: Not Supported 00:09:57.421 Namespace Granularity: Not Supported 00:09:57.421 SQ Associations: Not Supported 00:09:57.421 UUID List: Not Supported 00:09:57.421 Multi-Domain Subsystem: Not Supported 00:09:57.421 Fixed Capacity Management: Not Supported 00:09:57.421 Variable Capacity Management: Not Supported 00:09:57.421 Delete Endurance Group: Not Supported 00:09:57.421 Delete NVM Set: Not Supported 00:09:57.421 Extended LBA Formats Supported: Supported 00:09:57.421 Flexible Data Placement Supported: Not Supported 00:09:57.421 00:09:57.421 Controller Memory Buffer Support 00:09:57.421 ================================ 00:09:57.421 Supported: No 
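
The dump that follows is the output of build/bin/spdk_nvme_identify -i 0, which attaches through the stub's shared memory group (-i 0) and prints Identify Controller and Identify Namespace data for every controller it finds. To inspect a single controller instead, the tool can be pointed at one transport address; the -r syntax below is assumed from the tool's usage text rather than shown in this log:

    # Identify only the controller at 0000:00:10.0, reusing the stub's hugepages.
    ./build/bin/spdk_nvme_identify -i 0 -r 'trtype:PCIe traddr:0000:00:10.0'
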
00:09:57.421 00:09:57.421 Persistent Memory Region Support 00:09:57.421 ================================ 00:09:57.421 Supported: No 00:09:57.421 00:09:57.421 Admin Command Set Attributes 00:09:57.421 ============================ 00:09:57.421 Security Send/Receive: Not Supported 00:09:57.421 Format NVM: Supported 00:09:57.421 Firmware Activate/Download: Not Supported 00:09:57.421 Namespace Management: Supported 00:09:57.421 Device Self-Test: Not Supported 00:09:57.421 Directives: Supported 00:09:57.421 NVMe-MI: Not Supported 00:09:57.421 Virtualization Management: Not Supported 00:09:57.421 Doorbell Buffer Config: Supported 00:09:57.421 Get LBA Status Capability: Not Supported 00:09:57.421 Command & Feature Lockdown Capability: Not Supported 00:09:57.421 Abort Command Limit: 4 00:09:57.421 Async Event Request Limit: 4 00:09:57.421 Number of Firmware Slots: N/A 00:09:57.421 Firmware Slot 1 Read-Only: N/A 00:09:57.421 Firmware Activation Without Reset: N/A 00:09:57.421 Multiple Update Detection Support: N/A 00:09:57.421 Firmware Update Granularity: No Information Provided 00:09:57.421 Per-Namespace SMART Log: Yes 00:09:57.421 Asymmetric Namespace Access Log Page: Not Supported 00:09:57.421 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:57.421 Command Effects Log Page: Supported 00:09:57.421 Get Log Page Extended Data: Supported 00:09:57.421 Telemetry Log Pages: Not Supported 00:09:57.421 Persistent Event Log Pages: Not Supported 00:09:57.421 Supported Log Pages Log Page: May Support 00:09:57.421 Commands Supported & Effects Log Page: Not Supported 00:09:57.421 Feature Identifiers & Effects Log Page:May Support 00:09:57.421 NVMe-MI Commands & Effects Log Page: May Support 00:09:57.421 Data Area 4 for Telemetry Log: Not Supported 00:09:57.421 Error Log Page Entries Supported: 1 00:09:57.421 Keep Alive: Not Supported 00:09:57.421 00:09:57.421 NVM Command Set Attributes 00:09:57.421 ========================== 00:09:57.421 Submission Queue Entry Size 00:09:57.421 Max: 64 00:09:57.421 Min: 64 00:09:57.421 Completion Queue Entry Size 00:09:57.421 Max: 16 00:09:57.421 Min: 16 00:09:57.421 Number of Namespaces: 256 00:09:57.421 Compare Command: Supported 00:09:57.421 Write Uncorrectable Command: Not Supported 00:09:57.421 Dataset Management Command: Supported 00:09:57.421 Write Zeroes Command: Supported 00:09:57.421 Set Features Save Field: Supported 00:09:57.421 Reservations: Not Supported 00:09:57.421 Timestamp: Supported 00:09:57.421 Copy: Supported 00:09:57.421 Volatile Write Cache: Present 00:09:57.421 Atomic Write Unit (Normal): 1 00:09:57.421 Atomic Write Unit (PFail): 1 00:09:57.421 Atomic Compare & Write Unit: 1 00:09:57.421 Fused Compare & Write: Not Supported 00:09:57.421 Scatter-Gather List 00:09:57.421 SGL Command Set: Supported 00:09:57.421 SGL Keyed: Not Supported 00:09:57.421 SGL Bit Bucket Descriptor: Not Supported 00:09:57.421 SGL Metadata Pointer: Not Supported 00:09:57.421 Oversized SGL: Not Supported 00:09:57.421 SGL Metadata Address: Not Supported 00:09:57.421 SGL Offset: Not Supported 00:09:57.421 Transport SGL Data Block: Not Supported 00:09:57.421 Replay Protected Memory Block: Not Supported 00:09:57.421 00:09:57.421 Firmware Slot Information 00:09:57.421 ========================= 00:09:57.421 Active slot: 1 00:09:57.421 Slot 1 Firmware Revision: 1.0 00:09:57.421 00:09:57.421 00:09:57.421 Commands Supported and Effects 00:09:57.421 ============================== 00:09:57.421 Admin Commands 00:09:57.421 -------------- 00:09:57.421 Delete I/O Submission Queue (00h): Supported 
00:09:57.421 Create I/O Submission Queue (01h): Supported 00:09:57.421 Get Log Page (02h): Supported 00:09:57.421 Delete I/O Completion Queue (04h): Supported 00:09:57.421 Create I/O Completion Queue (05h): Supported 00:09:57.421 Identify (06h): Supported 00:09:57.421 Abort (08h): Supported 00:09:57.421 Set Features (09h): Supported 00:09:57.421 Get Features (0Ah): Supported 00:09:57.421 Asynchronous Event Request (0Ch): Supported 00:09:57.421 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:57.421 Directive Send (19h): Supported 00:09:57.421 Directive Receive (1Ah): Supported 00:09:57.421 Virtualization Management (1Ch): Supported 00:09:57.421 Doorbell Buffer Config (7Ch): Supported 00:09:57.421 Format NVM (80h): Supported LBA-Change 00:09:57.421 I/O Commands 00:09:57.421 ------------ 00:09:57.421 Flush (00h): Supported LBA-Change 00:09:57.421 Write (01h): Supported LBA-Change 00:09:57.421 Read (02h): Supported 00:09:57.421 Compare (05h): Supported 00:09:57.421 Write Zeroes (08h): Supported LBA-Change 00:09:57.421 Dataset Management (09h): Supported LBA-Change 00:09:57.421 Unknown (0Ch): Supported 00:09:57.421 Unknown (12h): Supported 00:09:57.421 Copy (19h): Supported LBA-Change 00:09:57.421 Unknown (1Dh): Supported LBA-Change 00:09:57.421 00:09:57.421 Error Log 00:09:57.421 ========= 00:09:57.421 00:09:57.421 Arbitration 00:09:57.421 =========== 00:09:57.421 Arbitration Burst: no limit 00:09:57.421 00:09:57.421 Power Management 00:09:57.421 ================ 00:09:57.421 Number of Power States: 1 00:09:57.421 Current Power State: Power State #0 00:09:57.421 Power State #0: 00:09:57.421 Max Power: 25.00 W 00:09:57.421 Non-Operational State: Operational 00:09:57.421 Entry Latency: 16 microseconds 00:09:57.421 Exit Latency: 4 microseconds 00:09:57.421 Relative Read Throughput: 0 00:09:57.421 Relative Read Latency: 0 00:09:57.421 Relative Write Throughput: 0 00:09:57.421 Relative Write Latency: 0 00:09:57.421 Idle Power[2024-12-07 03:54:40.083795] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64062 terminated unexpected 00:09:57.421 : Not Reported 00:09:57.421 Active Power: Not Reported 00:09:57.422 Non-Operational Permissive Mode: Not Supported 00:09:57.422 00:09:57.422 Health Information 00:09:57.422 ================== 00:09:57.422 Critical Warnings: 00:09:57.422 Available Spare Space: OK 00:09:57.422 Temperature: OK 00:09:57.422 Device Reliability: OK 00:09:57.422 Read Only: No 00:09:57.422 Volatile Memory Backup: OK 00:09:57.422 Current Temperature: 323 Kelvin (50 Celsius) 00:09:57.422 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:57.422 Available Spare: 0% 00:09:57.422 Available Spare Threshold: 0% 00:09:57.422 Life Percentage Used: 0% 00:09:57.422 Data Units Read: 732 00:09:57.422 Data Units Written: 660 00:09:57.422 Host Read Commands: 30160 00:09:57.422 Host Write Commands: 29946 00:09:57.422 Controller Busy Time: 0 minutes 00:09:57.422 Power Cycles: 0 00:09:57.422 Power On Hours: 0 hours 00:09:57.422 Unsafe Shutdowns: 0 00:09:57.422 Unrecoverable Media Errors: 0 00:09:57.422 Lifetime Error Log Entries: 0 00:09:57.422 Warning Temperature Time: 0 minutes 00:09:57.422 Critical Temperature Time: 0 minutes 00:09:57.422 00:09:57.422 Number of Queues 00:09:57.422 ================ 00:09:57.422 Number of I/O Submission Queues: 64 00:09:57.422 Number of I/O Completion Queues: 64 00:09:57.422 00:09:57.422 ZNS Specific Controller Data 00:09:57.422 ============================ 00:09:57.422 Zone Append Size Limit: 0 00:09:57.422 
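
The Active Namespaces sections below report capacities both in LBAs and rounded down to whole GiB; the two are tied together by the data size of the current LBA format (4096 bytes for formats #04 and #07 here). A worked check of the first two namespaces, using the figures from the dumps:

    echo $(( 1548666 * 4096 ))              # 6343335936 bytes, ~5.9 GiB -> reported as 5GiB
    echo $(( 1310720 * 4096 ))              # 5368709120 bytes, exactly 5 GiB
    echo $(( 1310720 * 4096 / 1024**3 ))    # 5
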
00:09:57.422 00:09:57.422 Active Namespaces 00:09:57.422 ================= 00:09:57.422 Namespace ID:1 00:09:57.422 Error Recovery Timeout: Unlimited 00:09:57.422 Command Set Identifier: NVM (00h) 00:09:57.422 Deallocate: Supported 00:09:57.422 Deallocated/Unwritten Error: Supported 00:09:57.422 Deallocated Read Value: All 0x00 00:09:57.422 Deallocate in Write Zeroes: Not Supported 00:09:57.422 Deallocated Guard Field: 0xFFFF 00:09:57.422 Flush: Supported 00:09:57.422 Reservation: Not Supported 00:09:57.422 Metadata Transferred as: Separate Metadata Buffer 00:09:57.422 Namespace Sharing Capabilities: Private 00:09:57.422 Size (in LBAs): 1548666 (5GiB) 00:09:57.422 Capacity (in LBAs): 1548666 (5GiB) 00:09:57.422 Utilization (in LBAs): 1548666 (5GiB) 00:09:57.422 Thin Provisioning: Not Supported 00:09:57.422 Per-NS Atomic Units: No 00:09:57.422 Maximum Single Source Range Length: 128 00:09:57.422 Maximum Copy Length: 128 00:09:57.422 Maximum Source Range Count: 128 00:09:57.422 NGUID/EUI64 Never Reused: No 00:09:57.422 Namespace Write Protected: No 00:09:57.422 Number of LBA Formats: 8 00:09:57.422 Current LBA Format: LBA Format #07 00:09:57.422 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:57.422 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:57.422 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:57.422 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:57.422 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:57.422 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:57.422 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:57.422 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:57.422 00:09:57.422 NVM Specific Namespace Data 00:09:57.422 =========================== 00:09:57.422 Logical Block Storage Tag Mask: 0 00:09:57.422 Protection Information Capabilities: 00:09:57.422 16b Guard Protection Information Storage Tag Support: No 00:09:57.422 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:57.422 Storage Tag Check Read Support: No 00:09:57.422 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.422 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.422 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.422 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.422 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.422 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.422 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.422 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.422 ===================================================== 00:09:57.422 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:57.422 ===================================================== 00:09:57.422 Controller Capabilities/Features 00:09:57.422 ================================ 00:09:57.422 Vendor ID: 1b36 00:09:57.422 Subsystem Vendor ID: 1af4 00:09:57.422 Serial Number: 12341 00:09:57.422 Model Number: QEMU NVMe Ctrl 00:09:57.422 Firmware Version: 8.0.0 00:09:57.422 Recommended Arb Burst: 6 00:09:57.422 IEEE OUI Identifier: 00 54 52 00:09:57.422 Multi-path I/O 00:09:57.422 May have multiple subsystem ports: No 00:09:57.422 May have multiple controllers: No 
00:09:57.422 Associated with SR-IOV VF: No 00:09:57.422 Max Data Transfer Size: 524288 00:09:57.422 Max Number of Namespaces: 256 00:09:57.422 Max Number of I/O Queues: 64 00:09:57.422 NVMe Specification Version (VS): 1.4 00:09:57.422 NVMe Specification Version (Identify): 1.4 00:09:57.422 Maximum Queue Entries: 2048 00:09:57.422 Contiguous Queues Required: Yes 00:09:57.422 Arbitration Mechanisms Supported 00:09:57.422 Weighted Round Robin: Not Supported 00:09:57.422 Vendor Specific: Not Supported 00:09:57.422 Reset Timeout: 7500 ms 00:09:57.422 Doorbell Stride: 4 bytes 00:09:57.422 NVM Subsystem Reset: Not Supported 00:09:57.422 Command Sets Supported 00:09:57.422 NVM Command Set: Supported 00:09:57.422 Boot Partition: Not Supported 00:09:57.422 Memory Page Size Minimum: 4096 bytes 00:09:57.422 Memory Page Size Maximum: 65536 bytes 00:09:57.422 Persistent Memory Region: Not Supported 00:09:57.422 Optional Asynchronous Events Supported 00:09:57.422 Namespace Attribute Notices: Supported 00:09:57.422 Firmware Activation Notices: Not Supported 00:09:57.422 ANA Change Notices: Not Supported 00:09:57.422 PLE Aggregate Log Change Notices: Not Supported 00:09:57.422 LBA Status Info Alert Notices: Not Supported 00:09:57.422 EGE Aggregate Log Change Notices: Not Supported 00:09:57.422 Normal NVM Subsystem Shutdown event: Not Supported 00:09:57.422 Zone Descriptor Change Notices: Not Supported 00:09:57.422 Discovery Log Change Notices: Not Supported 00:09:57.422 Controller Attributes 00:09:57.422 128-bit Host Identifier: Not Supported 00:09:57.422 Non-Operational Permissive Mode: Not Supported 00:09:57.422 NVM Sets: Not Supported 00:09:57.422 Read Recovery Levels: Not Supported 00:09:57.422 Endurance Groups: Not Supported 00:09:57.422 Predictable Latency Mode: Not Supported 00:09:57.422 Traffic Based Keep ALive: Not Supported 00:09:57.422 Namespace Granularity: Not Supported 00:09:57.422 SQ Associations: Not Supported 00:09:57.422 UUID List: Not Supported 00:09:57.422 Multi-Domain Subsystem: Not Supported 00:09:57.422 Fixed Capacity Management: Not Supported 00:09:57.422 Variable Capacity Management: Not Supported 00:09:57.422 Delete Endurance Group: Not Supported 00:09:57.422 Delete NVM Set: Not Supported 00:09:57.422 Extended LBA Formats Supported: Supported 00:09:57.422 Flexible Data Placement Supported: Not Supported 00:09:57.422 00:09:57.422 Controller Memory Buffer Support 00:09:57.422 ================================ 00:09:57.422 Supported: No 00:09:57.422 00:09:57.422 Persistent Memory Region Support 00:09:57.422 ================================ 00:09:57.422 Supported: No 00:09:57.422 00:09:57.422 Admin Command Set Attributes 00:09:57.422 ============================ 00:09:57.422 Security Send/Receive: Not Supported 00:09:57.422 Format NVM: Supported 00:09:57.422 Firmware Activate/Download: Not Supported 00:09:57.422 Namespace Management: Supported 00:09:57.422 Device Self-Test: Not Supported 00:09:57.422 Directives: Supported 00:09:57.422 NVMe-MI: Not Supported 00:09:57.422 Virtualization Management: Not Supported 00:09:57.422 Doorbell Buffer Config: Supported 00:09:57.422 Get LBA Status Capability: Not Supported 00:09:57.422 Command & Feature Lockdown Capability: Not Supported 00:09:57.422 Abort Command Limit: 4 00:09:57.422 Async Event Request Limit: 4 00:09:57.422 Number of Firmware Slots: N/A 00:09:57.422 Firmware Slot 1 Read-Only: N/A 00:09:57.422 Firmware Activation Without Reset: N/A 00:09:57.422 Multiple Update Detection Support: N/A 00:09:57.422 Firmware Update Granularity: No 
Information Provided 00:09:57.422 Per-Namespace SMART Log: Yes 00:09:57.422 Asymmetric Namespace Access Log Page: Not Supported 00:09:57.422 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:57.422 Command Effects Log Page: Supported 00:09:57.422 Get Log Page Extended Data: Supported 00:09:57.422 Telemetry Log Pages: Not Supported 00:09:57.422 Persistent Event Log Pages: Not Supported 00:09:57.422 Supported Log Pages Log Page: May Support 00:09:57.422 Commands Supported & Effects Log Page: Not Supported 00:09:57.422 Feature Identifiers & Effects Log Page:May Support 00:09:57.422 NVMe-MI Commands & Effects Log Page: May Support 00:09:57.423 Data Area 4 for Telemetry Log: Not Supported 00:09:57.423 Error Log Page Entries Supported: 1 00:09:57.423 Keep Alive: Not Supported 00:09:57.423 00:09:57.423 NVM Command Set Attributes 00:09:57.423 ========================== 00:09:57.423 Submission Queue Entry Size 00:09:57.423 Max: 64 00:09:57.423 Min: 64 00:09:57.423 Completion Queue Entry Size 00:09:57.423 Max: 16 00:09:57.423 Min: 16 00:09:57.423 Number of Namespaces: 256 00:09:57.423 Compare Command: Supported 00:09:57.423 Write Uncorrectable Command: Not Supported 00:09:57.423 Dataset Management Command: Supported 00:09:57.423 Write Zeroes Command: Supported 00:09:57.423 Set Features Save Field: Supported 00:09:57.423 Reservations: Not Supported 00:09:57.423 Timestamp: Supported 00:09:57.423 Copy: Supported 00:09:57.423 Volatile Write Cache: Present 00:09:57.423 Atomic Write Unit (Normal): 1 00:09:57.423 Atomic Write Unit (PFail): 1 00:09:57.423 Atomic Compare & Write Unit: 1 00:09:57.423 Fused Compare & Write: Not Supported 00:09:57.423 Scatter-Gather List 00:09:57.423 SGL Command Set: Supported 00:09:57.423 SGL Keyed: Not Supported 00:09:57.423 SGL Bit Bucket Descriptor: Not Supported 00:09:57.423 SGL Metadata Pointer: Not Supported 00:09:57.423 Oversized SGL: Not Supported 00:09:57.423 SGL Metadata Address: Not Supported 00:09:57.423 SGL Offset: Not Supported 00:09:57.423 Transport SGL Data Block: Not Supported 00:09:57.423 Replay Protected Memory Block: Not Supported 00:09:57.423 00:09:57.423 Firmware Slot Information 00:09:57.423 ========================= 00:09:57.423 Active slot: 1 00:09:57.423 Slot 1 Firmware Revision: 1.0 00:09:57.423 00:09:57.423 00:09:57.423 Commands Supported and Effects 00:09:57.423 ============================== 00:09:57.423 Admin Commands 00:09:57.423 -------------- 00:09:57.423 Delete I/O Submission Queue (00h): Supported 00:09:57.423 Create I/O Submission Queue (01h): Supported 00:09:57.423 Get Log Page (02h): Supported 00:09:57.423 Delete I/O Completion Queue (04h): Supported 00:09:57.423 Create I/O Completion Queue (05h): Supported 00:09:57.423 Identify (06h): Supported 00:09:57.423 Abort (08h): Supported 00:09:57.423 Set Features (09h): Supported 00:09:57.423 Get Features (0Ah): Supported 00:09:57.423 Asynchronous Event Request (0Ch): Supported 00:09:57.423 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:57.423 Directive Send (19h): Supported 00:09:57.423 Directive Receive (1Ah): Supported 00:09:57.423 Virtualization Management (1Ch): Supported 00:09:57.423 Doorbell Buffer Config (7Ch): Supported 00:09:57.423 Format NVM (80h): Supported LBA-Change 00:09:57.423 I/O Commands 00:09:57.423 ------------ 00:09:57.423 Flush (00h): Supported LBA-Change 00:09:57.423 Write (01h): Supported LBA-Change 00:09:57.423 Read (02h): Supported 00:09:57.423 Compare (05h): Supported 00:09:57.423 Write Zeroes (08h): Supported LBA-Change 00:09:57.423 Dataset Management 
(09h): Supported LBA-Change 00:09:57.423 Unknown (0Ch): Supported 00:09:57.423 Unknown (12h): Supported 00:09:57.423 Copy (19h): Supported LBA-Change 00:09:57.423 Unknown (1Dh): Supported LBA-Change 00:09:57.423 00:09:57.423 Error Log 00:09:57.423 ========= 00:09:57.423 00:09:57.423 Arbitration 00:09:57.423 =========== 00:09:57.423 Arbitration Burst: no limit 00:09:57.423 00:09:57.423 Power Management 00:09:57.423 ================ 00:09:57.423 Number of Power States: 1 00:09:57.423 Current Power State: Power State #0 00:09:57.423 Power State #0: 00:09:57.423 Max Power: 25.00 W 00:09:57.423 Non-Operational State: Operational 00:09:57.423 Entry Latency: 16 microseconds 00:09:57.423 Exit Latency: 4 microseconds 00:09:57.423 Relative Read Throughput: 0 00:09:57.423 Relative Read Latency: 0 00:09:57.423 Relative Write Throughput: 0 00:09:57.423 Relative Write Latency: 0 00:09:57.423 Idle Power: Not Reported 00:09:57.423 Active Power: Not Reported 00:09:57.423 Non-Operational Permissive Mode: Not Supported 00:09:57.423 00:09:57.423 Health Information 00:09:57.423 ================== 00:09:57.423 Critical Warnings: 00:09:57.423 Available Spare Space: OK 00:09:57.423 Temperature: [2024-12-07 03:54:40.084804] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64062 terminated unexpected 00:09:57.423 OK 00:09:57.423 Device Reliability: OK 00:09:57.423 Read Only: No 00:09:57.423 Volatile Memory Backup: OK 00:09:57.423 Current Temperature: 323 Kelvin (50 Celsius) 00:09:57.423 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:57.423 Available Spare: 0% 00:09:57.423 Available Spare Threshold: 0% 00:09:57.423 Life Percentage Used: 0% 00:09:57.423 Data Units Read: 1134 00:09:57.423 Data Units Written: 1000 00:09:57.423 Host Read Commands: 45477 00:09:57.423 Host Write Commands: 44241 00:09:57.423 Controller Busy Time: 0 minutes 00:09:57.423 Power Cycles: 0 00:09:57.423 Power On Hours: 0 hours 00:09:57.423 Unsafe Shutdowns: 0 00:09:57.423 Unrecoverable Media Errors: 0 00:09:57.423 Lifetime Error Log Entries: 0 00:09:57.423 Warning Temperature Time: 0 minutes 00:09:57.423 Critical Temperature Time: 0 minutes 00:09:57.423 00:09:57.423 Number of Queues 00:09:57.423 ================ 00:09:57.423 Number of I/O Submission Queues: 64 00:09:57.423 Number of I/O Completion Queues: 64 00:09:57.423 00:09:57.423 ZNS Specific Controller Data 00:09:57.423 ============================ 00:09:57.423 Zone Append Size Limit: 0 00:09:57.423 00:09:57.423 00:09:57.423 Active Namespaces 00:09:57.423 ================= 00:09:57.423 Namespace ID:1 00:09:57.423 Error Recovery Timeout: Unlimited 00:09:57.423 Command Set Identifier: NVM (00h) 00:09:57.423 Deallocate: Supported 00:09:57.423 Deallocated/Unwritten Error: Supported 00:09:57.423 Deallocated Read Value: All 0x00 00:09:57.423 Deallocate in Write Zeroes: Not Supported 00:09:57.423 Deallocated Guard Field: 0xFFFF 00:09:57.423 Flush: Supported 00:09:57.423 Reservation: Not Supported 00:09:57.423 Namespace Sharing Capabilities: Private 00:09:57.423 Size (in LBAs): 1310720 (5GiB) 00:09:57.423 Capacity (in LBAs): 1310720 (5GiB) 00:09:57.423 Utilization (in LBAs): 1310720 (5GiB) 00:09:57.423 Thin Provisioning: Not Supported 00:09:57.423 Per-NS Atomic Units: No 00:09:57.423 Maximum Single Source Range Length: 128 00:09:57.423 Maximum Copy Length: 128 00:09:57.423 Maximum Source Range Count: 128 00:09:57.423 NGUID/EUI64 Never Reused: No 00:09:57.423 Namespace Write Protected: No 00:09:57.423 Number of LBA Formats: 8 00:09:57.423 Current LBA 
Format: LBA Format #04 00:09:57.423 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:57.423 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:57.423 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:57.423 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:57.423 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:57.423 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:57.423 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:57.423 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:57.423 00:09:57.423 NVM Specific Namespace Data 00:09:57.423 =========================== 00:09:57.423 Logical Block Storage Tag Mask: 0 00:09:57.423 Protection Information Capabilities: 00:09:57.423 16b Guard Protection Information Storage Tag Support: No 00:09:57.423 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:57.423 Storage Tag Check Read Support: No 00:09:57.423 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.423 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.423 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.423 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.423 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.423 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.423 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.423 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.423 ===================================================== 00:09:57.423 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:57.423 ===================================================== 00:09:57.423 Controller Capabilities/Features 00:09:57.423 ================================ 00:09:57.423 Vendor ID: 1b36 00:09:57.423 Subsystem Vendor ID: 1af4 00:09:57.423 Serial Number: 12343 00:09:57.423 Model Number: QEMU NVMe Ctrl 00:09:57.423 Firmware Version: 8.0.0 00:09:57.423 Recommended Arb Burst: 6 00:09:57.423 IEEE OUI Identifier: 00 54 52 00:09:57.423 Multi-path I/O 00:09:57.423 May have multiple subsystem ports: No 00:09:57.423 May have multiple controllers: Yes 00:09:57.423 Associated with SR-IOV VF: No 00:09:57.423 Max Data Transfer Size: 524288 00:09:57.423 Max Number of Namespaces: 256 00:09:57.423 Max Number of I/O Queues: 64 00:09:57.424 NVMe Specification Version (VS): 1.4 00:09:57.424 NVMe Specification Version (Identify): 1.4 00:09:57.424 Maximum Queue Entries: 2048 00:09:57.424 Contiguous Queues Required: Yes 00:09:57.424 Arbitration Mechanisms Supported 00:09:57.424 Weighted Round Robin: Not Supported 00:09:57.424 Vendor Specific: Not Supported 00:09:57.424 Reset Timeout: 7500 ms 00:09:57.424 Doorbell Stride: 4 bytes 00:09:57.424 NVM Subsystem Reset: Not Supported 00:09:57.424 Command Sets Supported 00:09:57.424 NVM Command Set: Supported 00:09:57.424 Boot Partition: Not Supported 00:09:57.424 Memory Page Size Minimum: 4096 bytes 00:09:57.424 Memory Page Size Maximum: 65536 bytes 00:09:57.424 Persistent Memory Region: Not Supported 00:09:57.424 Optional Asynchronous Events Supported 00:09:57.424 Namespace Attribute Notices: Supported 00:09:57.424 Firmware Activation Notices: Not Supported 00:09:57.424 ANA Change Notices: Not Supported 00:09:57.424 PLE Aggregate 
Log Change Notices: Not Supported 00:09:57.424 LBA Status Info Alert Notices: Not Supported 00:09:57.424 EGE Aggregate Log Change Notices: Not Supported 00:09:57.424 Normal NVM Subsystem Shutdown event: Not Supported 00:09:57.424 Zone Descriptor Change Notices: Not Supported 00:09:57.424 Discovery Log Change Notices: Not Supported 00:09:57.424 Controller Attributes 00:09:57.424 128-bit Host Identifier: Not Supported 00:09:57.424 Non-Operational Permissive Mode: Not Supported 00:09:57.424 NVM Sets: Not Supported 00:09:57.424 Read Recovery Levels: Not Supported 00:09:57.424 Endurance Groups: Supported 00:09:57.424 Predictable Latency Mode: Not Supported 00:09:57.424 Traffic Based Keep ALive: Not Supported 00:09:57.424 Namespace Granularity: Not Supported 00:09:57.424 SQ Associations: Not Supported 00:09:57.424 UUID List: Not Supported 00:09:57.424 Multi-Domain Subsystem: Not Supported 00:09:57.424 Fixed Capacity Management: Not Supported 00:09:57.424 Variable Capacity Management: Not Supported 00:09:57.424 Delete Endurance Group: Not Supported 00:09:57.424 Delete NVM Set: Not Supported 00:09:57.424 Extended LBA Formats Supported: Supported 00:09:57.424 Flexible Data Placement Supported: Supported 00:09:57.424 00:09:57.424 Controller Memory Buffer Support 00:09:57.424 ================================ 00:09:57.424 Supported: No 00:09:57.424 00:09:57.424 Persistent Memory Region Support 00:09:57.424 ================================ 00:09:57.424 Supported: No 00:09:57.424 00:09:57.424 Admin Command Set Attributes 00:09:57.424 ============================ 00:09:57.424 Security Send/Receive: Not Supported 00:09:57.424 Format NVM: Supported 00:09:57.424 Firmware Activate/Download: Not Supported 00:09:57.424 Namespace Management: Supported 00:09:57.424 Device Self-Test: Not Supported 00:09:57.424 Directives: Supported 00:09:57.424 NVMe-MI: Not Supported 00:09:57.424 Virtualization Management: Not Supported 00:09:57.424 Doorbell Buffer Config: Supported 00:09:57.424 Get LBA Status Capability: Not Supported 00:09:57.424 Command & Feature Lockdown Capability: Not Supported 00:09:57.424 Abort Command Limit: 4 00:09:57.424 Async Event Request Limit: 4 00:09:57.424 Number of Firmware Slots: N/A 00:09:57.424 Firmware Slot 1 Read-Only: N/A 00:09:57.424 Firmware Activation Without Reset: N/A 00:09:57.424 Multiple Update Detection Support: N/A 00:09:57.424 Firmware Update Granularity: No Information Provided 00:09:57.424 Per-Namespace SMART Log: Yes 00:09:57.424 Asymmetric Namespace Access Log Page: Not Supported 00:09:57.424 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:57.424 Command Effects Log Page: Supported 00:09:57.424 Get Log Page Extended Data: Supported 00:09:57.424 Telemetry Log Pages: Not Supported 00:09:57.424 Persistent Event Log Pages: Not Supported 00:09:57.424 Supported Log Pages Log Page: May Support 00:09:57.424 Commands Supported & Effects Log Page: Not Supported 00:09:57.424 Feature Identifiers & Effects Log Page:May Support 00:09:57.424 NVMe-MI Commands & Effects Log Page: May Support 00:09:57.424 Data Area 4 for Telemetry Log: Not Supported 00:09:57.424 Error Log Page Entries Supported: 1 00:09:57.424 Keep Alive: Not Supported 00:09:57.424 00:09:57.424 NVM Command Set Attributes 00:09:57.424 ========================== 00:09:57.424 Submission Queue Entry Size 00:09:57.424 Max: 64 00:09:57.424 Min: 64 00:09:57.424 Completion Queue Entry Size 00:09:57.424 Max: 16 00:09:57.424 Min: 16 00:09:57.424 Number of Namespaces: 256 00:09:57.424 Compare Command: Supported 00:09:57.424 Write 
Uncorrectable Command: Not Supported 00:09:57.424 Dataset Management Command: Supported 00:09:57.424 Write Zeroes Command: Supported 00:09:57.424 Set Features Save Field: Supported 00:09:57.424 Reservations: Not Supported 00:09:57.424 Timestamp: Supported 00:09:57.424 Copy: Supported 00:09:57.424 Volatile Write Cache: Present 00:09:57.424 Atomic Write Unit (Normal): 1 00:09:57.424 Atomic Write Unit (PFail): 1 00:09:57.424 Atomic Compare & Write Unit: 1 00:09:57.424 Fused Compare & Write: Not Supported 00:09:57.424 Scatter-Gather List 00:09:57.424 SGL Command Set: Supported 00:09:57.424 SGL Keyed: Not Supported 00:09:57.424 SGL Bit Bucket Descriptor: Not Supported 00:09:57.424 SGL Metadata Pointer: Not Supported 00:09:57.424 Oversized SGL: Not Supported 00:09:57.424 SGL Metadata Address: Not Supported 00:09:57.424 SGL Offset: Not Supported 00:09:57.424 Transport SGL Data Block: Not Supported 00:09:57.424 Replay Protected Memory Block: Not Supported 00:09:57.424 00:09:57.424 Firmware Slot Information 00:09:57.424 ========================= 00:09:57.424 Active slot: 1 00:09:57.424 Slot 1 Firmware Revision: 1.0 00:09:57.424 00:09:57.424 00:09:57.424 Commands Supported and Effects 00:09:57.424 ============================== 00:09:57.424 Admin Commands 00:09:57.424 -------------- 00:09:57.424 Delete I/O Submission Queue (00h): Supported 00:09:57.424 Create I/O Submission Queue (01h): Supported 00:09:57.424 Get Log Page (02h): Supported 00:09:57.424 Delete I/O Completion Queue (04h): Supported 00:09:57.424 Create I/O Completion Queue (05h): Supported 00:09:57.424 Identify (06h): Supported 00:09:57.424 Abort (08h): Supported 00:09:57.424 Set Features (09h): Supported 00:09:57.424 Get Features (0Ah): Supported 00:09:57.424 Asynchronous Event Request (0Ch): Supported 00:09:57.424 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:57.424 Directive Send (19h): Supported 00:09:57.424 Directive Receive (1Ah): Supported 00:09:57.424 Virtualization Management (1Ch): Supported 00:09:57.424 Doorbell Buffer Config (7Ch): Supported 00:09:57.424 Format NVM (80h): Supported LBA-Change 00:09:57.424 I/O Commands 00:09:57.424 ------------ 00:09:57.424 Flush (00h): Supported LBA-Change 00:09:57.424 Write (01h): Supported LBA-Change 00:09:57.424 Read (02h): Supported 00:09:57.424 Compare (05h): Supported 00:09:57.424 Write Zeroes (08h): Supported LBA-Change 00:09:57.424 Dataset Management (09h): Supported LBA-Change 00:09:57.424 Unknown (0Ch): Supported 00:09:57.424 Unknown (12h): Supported 00:09:57.424 Copy (19h): Supported LBA-Change 00:09:57.424 Unknown (1Dh): Supported LBA-Change 00:09:57.424 00:09:57.424 Error Log 00:09:57.424 ========= 00:09:57.424 00:09:57.424 Arbitration 00:09:57.424 =========== 00:09:57.424 Arbitration Burst: no limit 00:09:57.424 00:09:57.424 Power Management 00:09:57.424 ================ 00:09:57.424 Number of Power States: 1 00:09:57.424 Current Power State: Power State #0 00:09:57.424 Power State #0: 00:09:57.424 Max Power: 25.00 W 00:09:57.424 Non-Operational State: Operational 00:09:57.424 Entry Latency: 16 microseconds 00:09:57.424 Exit Latency: 4 microseconds 00:09:57.424 Relative Read Throughput: 0 00:09:57.424 Relative Read Latency: 0 00:09:57.424 Relative Write Throughput: 0 00:09:57.424 Relative Write Latency: 0 00:09:57.424 Idle Power: Not Reported 00:09:57.424 Active Power: Not Reported 00:09:57.424 Non-Operational Permissive Mode: Not Supported 00:09:57.424 00:09:57.424 Health Information 00:09:57.424 ================== 00:09:57.424 Critical Warnings: 00:09:57.424 
Available Spare Space: OK 00:09:57.424 Temperature: OK 00:09:57.424 Device Reliability: OK 00:09:57.424 Read Only: No 00:09:57.424 Volatile Memory Backup: OK 00:09:57.424 Current Temperature: 323 Kelvin (50 Celsius) 00:09:57.424 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:57.424 Available Spare: 0% 00:09:57.424 Available Spare Threshold: 0% 00:09:57.424 Life Percentage Used: 0% 00:09:57.424 Data Units Read: 1062 00:09:57.424 Data Units Written: 991 00:09:57.424 Host Read Commands: 33097 00:09:57.424 Host Write Commands: 32520 00:09:57.424 Controller Busy Time: 0 minutes 00:09:57.424 Power Cycles: 0 00:09:57.424 Power On Hours: 0 hours 00:09:57.425 Unsafe Shutdowns: 0 00:09:57.425 Unrecoverable Media Errors: 0 00:09:57.425 Lifetime Error Log Entries: 0 00:09:57.425 Warning Temperature Time: 0 minutes 00:09:57.425 Critical Temperature Time: 0 minutes 00:09:57.425 00:09:57.425 Number of Queues 00:09:57.425 ================ 00:09:57.425 Number of I/O Submission Queues: 64 00:09:57.425 Number of I/O Completion Queues: 64 00:09:57.425 00:09:57.425 ZNS Specific Controller Data 00:09:57.425 ============================ 00:09:57.425 Zone Append Size Limit: 0 00:09:57.425 00:09:57.425 00:09:57.425 Active Namespaces 00:09:57.425 ================= 00:09:57.425 Namespace ID:1 00:09:57.425 Error Recovery Timeout: Unlimited 00:09:57.425 Command Set Identifier: NVM (00h) 00:09:57.425 Deallocate: Supported 00:09:57.425 Deallocated/Unwritten Error: Supported 00:09:57.425 Deallocated Read Value: All 0x00 00:09:57.425 Deallocate in Write Zeroes: Not Supported 00:09:57.425 Deallocated Guard Field: 0xFFFF 00:09:57.425 Flush: Supported 00:09:57.425 Reservation: Not Supported 00:09:57.425 Namespace Sharing Capabilities: Multiple Controllers 00:09:57.425 Size (in LBAs): 262144 (1GiB) 00:09:57.425 Capacity (in LBAs): 262144 (1GiB) 00:09:57.425 Utilization (in LBAs): 262144 (1GiB) 00:09:57.425 Thin Provisioning: Not Supported 00:09:57.425 Per-NS Atomic Units: No 00:09:57.425 Maximum Single Source Range Length: 128 00:09:57.425 Maximum Copy Length: 128 00:09:57.425 Maximum Source Range Count: 128 00:09:57.425 NGUID/EUI64 Never Reused: No 00:09:57.425 Namespace Write Protected: No 00:09:57.425 Endurance group ID: 1 00:09:57.425 Number of LBA Formats: 8 00:09:57.425 Current LBA Format: LBA Format #04 00:09:57.425 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:57.425 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:57.425 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:57.425 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:57.425 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:57.425 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:57.425 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:57.425 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:57.425 00:09:57.425 Get Feature FDP: 00:09:57.425 ================ 00:09:57.425 Enabled: Yes 00:09:57.425 FDP configuration index: 0 00:09:57.425 00:09:57.425 FDP configurations log page 00:09:57.425 =========================== 00:09:57.425 Number of FDP configurations: 1 00:09:57.425 Version: 0 00:09:57.425 Size: 112 00:09:57.425 FDP Configuration Descriptor: 0 00:09:57.425 Descriptor Size: 96 00:09:57.425 Reclaim Group Identifier format: 2 00:09:57.425 FDP Volatile Write Cache: Not Present 00:09:57.425 FDP Configuration: Valid 00:09:57.425 Vendor Specific Size: 0 00:09:57.425 Number of Reclaim Groups: 2 00:09:57.425 Number of Reclaim Unit Handles: 8 00:09:57.425 Max Placement Identifiers: 128 00:09:57.425 Number of
Namespaces Supported: 256 00:09:57.425 Reclaim Unit Nominal Size: 6000000 bytes 00:09:57.425 Estimated Reclaim Unit Time Limit: Not Reported 00:09:57.425 RUH Desc #000: RUH Type: Initially Isolated 00:09:57.425 RUH Desc #001: RUH Type: Initially Isolated 00:09:57.425 RUH Desc #002: RUH Type: Initially Isolated 00:09:57.425 RUH Desc #003: RUH Type: Initially Isolated 00:09:57.425 RUH Desc #004: RUH Type: Initially Isolated 00:09:57.425 RUH Desc #005: RUH Type: Initially Isolated 00:09:57.425 RUH Desc #006: RUH Type: Initially Isolated 00:09:57.425 RUH Desc #007: RUH Type: Initially Isolated 00:09:57.425 00:09:57.425 FDP reclaim unit handle usage log page 00:09:57.425 ====================================== 00:09:57.425 Number of Reclaim Unit Handles: 8 00:09:57.425 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:57.425 RUH Usage Desc #001: RUH Attributes: Unused 00:09:57.425 RUH Usage Desc #002: RUH Attributes: Unused 00:09:57.425 RUH Usage Desc #003: RUH Attributes: Unused 00:09:57.425 RUH Usage Desc #004: RUH Attributes: Unused 00:09:57.425 RUH Usage Desc #005: RUH Attributes: Unused 00:09:57.425 RUH Usage Desc #006: RUH Attributes: Unused 00:09:57.425 RUH Usage Desc #007: RUH Attributes: Unused 00:09:57.425 00:09:57.425 FDP statistics log page 00:09:57.425 ======================= 00:09:57.425 Host bytes with metadata written: 618504192 00:09:57.425 [2024-12-07 03:54:40.086480] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64062 terminated unexpected 00:09:57.425 Media bytes with metadata written: 618586112 00:09:57.425 Media bytes erased: 0 00:09:57.425 00:09:57.425 FDP events log page 00:09:57.425 =================== 00:09:57.425 Number of FDP events: 0 00:09:57.425 00:09:57.425 NVM Specific Namespace Data 00:09:57.425 =========================== 00:09:57.425 Logical Block Storage Tag Mask: 0 00:09:57.425 Protection Information Capabilities: 00:09:57.425 16b Guard Protection Information Storage Tag Support: No 00:09:57.425 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:57.425 Storage Tag Check Read Support: No 00:09:57.425 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.425 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.425 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.425 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.425 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.425 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.425 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.425 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.425 ===================================================== 00:09:57.425 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:57.425 ===================================================== 00:09:57.425 Controller Capabilities/Features 00:09:57.425 ================================ 00:09:57.425 Vendor ID: 1b36 00:09:57.425 Subsystem Vendor ID: 1af4 00:09:57.425 Serial Number: 12342 00:09:57.425 Model Number: QEMU NVMe Ctrl 00:09:57.425 Firmware Version: 8.0.0 00:09:57.425 Recommended Arb Burst: 6 00:09:57.425 IEEE OUI Identifier: 00 54 52 00:09:57.425 Multi-path I/O
00:09:57.425 May have multiple subsystem ports: No 00:09:57.425 May have multiple controllers: No 00:09:57.425 Associated with SR-IOV VF: No 00:09:57.425 Max Data Transfer Size: 524288 00:09:57.425 Max Number of Namespaces: 256 00:09:57.425 Max Number of I/O Queues: 64 00:09:57.425 NVMe Specification Version (VS): 1.4 00:09:57.425 NVMe Specification Version (Identify): 1.4 00:09:57.425 Maximum Queue Entries: 2048 00:09:57.425 Contiguous Queues Required: Yes 00:09:57.425 Arbitration Mechanisms Supported 00:09:57.425 Weighted Round Robin: Not Supported 00:09:57.425 Vendor Specific: Not Supported 00:09:57.425 Reset Timeout: 7500 ms 00:09:57.425 Doorbell Stride: 4 bytes 00:09:57.425 NVM Subsystem Reset: Not Supported 00:09:57.425 Command Sets Supported 00:09:57.425 NVM Command Set: Supported 00:09:57.425 Boot Partition: Not Supported 00:09:57.425 Memory Page Size Minimum: 4096 bytes 00:09:57.425 Memory Page Size Maximum: 65536 bytes 00:09:57.425 Persistent Memory Region: Not Supported 00:09:57.425 Optional Asynchronous Events Supported 00:09:57.425 Namespace Attribute Notices: Supported 00:09:57.425 Firmware Activation Notices: Not Supported 00:09:57.425 ANA Change Notices: Not Supported 00:09:57.425 PLE Aggregate Log Change Notices: Not Supported 00:09:57.425 LBA Status Info Alert Notices: Not Supported 00:09:57.425 EGE Aggregate Log Change Notices: Not Supported 00:09:57.425 Normal NVM Subsystem Shutdown event: Not Supported 00:09:57.426 Zone Descriptor Change Notices: Not Supported 00:09:57.426 Discovery Log Change Notices: Not Supported 00:09:57.426 Controller Attributes 00:09:57.426 128-bit Host Identifier: Not Supported 00:09:57.426 Non-Operational Permissive Mode: Not Supported 00:09:57.426 NVM Sets: Not Supported 00:09:57.426 Read Recovery Levels: Not Supported 00:09:57.426 Endurance Groups: Not Supported 00:09:57.426 Predictable Latency Mode: Not Supported 00:09:57.426 Traffic Based Keep ALive: Not Supported 00:09:57.426 Namespace Granularity: Not Supported 00:09:57.426 SQ Associations: Not Supported 00:09:57.426 UUID List: Not Supported 00:09:57.426 Multi-Domain Subsystem: Not Supported 00:09:57.426 Fixed Capacity Management: Not Supported 00:09:57.426 Variable Capacity Management: Not Supported 00:09:57.426 Delete Endurance Group: Not Supported 00:09:57.426 Delete NVM Set: Not Supported 00:09:57.426 Extended LBA Formats Supported: Supported 00:09:57.426 Flexible Data Placement Supported: Not Supported 00:09:57.426 00:09:57.426 Controller Memory Buffer Support 00:09:57.426 ================================ 00:09:57.426 Supported: No 00:09:57.426 00:09:57.426 Persistent Memory Region Support 00:09:57.426 ================================ 00:09:57.426 Supported: No 00:09:57.426 00:09:57.426 Admin Command Set Attributes 00:09:57.426 ============================ 00:09:57.426 Security Send/Receive: Not Supported 00:09:57.426 Format NVM: Supported 00:09:57.426 Firmware Activate/Download: Not Supported 00:09:57.426 Namespace Management: Supported 00:09:57.426 Device Self-Test: Not Supported 00:09:57.426 Directives: Supported 00:09:57.426 NVMe-MI: Not Supported 00:09:57.426 Virtualization Management: Not Supported 00:09:57.426 Doorbell Buffer Config: Supported 00:09:57.426 Get LBA Status Capability: Not Supported 00:09:57.426 Command & Feature Lockdown Capability: Not Supported 00:09:57.426 Abort Command Limit: 4 00:09:57.426 Async Event Request Limit: 4 00:09:57.426 Number of Firmware Slots: N/A 00:09:57.426 Firmware Slot 1 Read-Only: N/A 00:09:57.426 Firmware Activation Without Reset: N/A 
00:09:57.426 Multiple Update Detection Support: N/A 00:09:57.426 Firmware Update Granularity: No Information Provided 00:09:57.426 Per-Namespace SMART Log: Yes 00:09:57.426 Asymmetric Namespace Access Log Page: Not Supported 00:09:57.426 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:57.426 Command Effects Log Page: Supported 00:09:57.426 Get Log Page Extended Data: Supported 00:09:57.426 Telemetry Log Pages: Not Supported 00:09:57.426 Persistent Event Log Pages: Not Supported 00:09:57.426 Supported Log Pages Log Page: May Support 00:09:57.426 Commands Supported & Effects Log Page: Not Supported 00:09:57.426 Feature Identifiers & Effects Log Page:May Support 00:09:57.426 NVMe-MI Commands & Effects Log Page: May Support 00:09:57.426 Data Area 4 for Telemetry Log: Not Supported 00:09:57.426 Error Log Page Entries Supported: 1 00:09:57.426 Keep Alive: Not Supported 00:09:57.426 00:09:57.426 NVM Command Set Attributes 00:09:57.426 ========================== 00:09:57.426 Submission Queue Entry Size 00:09:57.426 Max: 64 00:09:57.426 Min: 64 00:09:57.426 Completion Queue Entry Size 00:09:57.426 Max: 16 00:09:57.426 Min: 16 00:09:57.426 Number of Namespaces: 256 00:09:57.426 Compare Command: Supported 00:09:57.426 Write Uncorrectable Command: Not Supported 00:09:57.426 Dataset Management Command: Supported 00:09:57.426 Write Zeroes Command: Supported 00:09:57.426 Set Features Save Field: Supported 00:09:57.426 Reservations: Not Supported 00:09:57.426 Timestamp: Supported 00:09:57.426 Copy: Supported 00:09:57.426 Volatile Write Cache: Present 00:09:57.426 Atomic Write Unit (Normal): 1 00:09:57.426 Atomic Write Unit (PFail): 1 00:09:57.426 Atomic Compare & Write Unit: 1 00:09:57.426 Fused Compare & Write: Not Supported 00:09:57.426 Scatter-Gather List 00:09:57.426 SGL Command Set: Supported 00:09:57.426 SGL Keyed: Not Supported 00:09:57.426 SGL Bit Bucket Descriptor: Not Supported 00:09:57.426 SGL Metadata Pointer: Not Supported 00:09:57.426 Oversized SGL: Not Supported 00:09:57.426 SGL Metadata Address: Not Supported 00:09:57.426 SGL Offset: Not Supported 00:09:57.426 Transport SGL Data Block: Not Supported 00:09:57.426 Replay Protected Memory Block: Not Supported 00:09:57.426 00:09:57.426 Firmware Slot Information 00:09:57.426 ========================= 00:09:57.426 Active slot: 1 00:09:57.426 Slot 1 Firmware Revision: 1.0 00:09:57.426 00:09:57.426 00:09:57.426 Commands Supported and Effects 00:09:57.426 ============================== 00:09:57.426 Admin Commands 00:09:57.426 -------------- 00:09:57.426 Delete I/O Submission Queue (00h): Supported 00:09:57.426 Create I/O Submission Queue (01h): Supported 00:09:57.426 Get Log Page (02h): Supported 00:09:57.426 Delete I/O Completion Queue (04h): Supported 00:09:57.426 Create I/O Completion Queue (05h): Supported 00:09:57.426 Identify (06h): Supported 00:09:57.426 Abort (08h): Supported 00:09:57.426 Set Features (09h): Supported 00:09:57.426 Get Features (0Ah): Supported 00:09:57.426 Asynchronous Event Request (0Ch): Supported 00:09:57.426 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:57.426 Directive Send (19h): Supported 00:09:57.426 Directive Receive (1Ah): Supported 00:09:57.426 Virtualization Management (1Ch): Supported 00:09:57.426 Doorbell Buffer Config (7Ch): Supported 00:09:57.426 Format NVM (80h): Supported LBA-Change 00:09:57.426 I/O Commands 00:09:57.426 ------------ 00:09:57.426 Flush (00h): Supported LBA-Change 00:09:57.426 Write (01h): Supported LBA-Change 00:09:57.426 Read (02h): Supported 00:09:57.426 Compare (05h): 
Supported 00:09:57.426 Write Zeroes (08h): Supported LBA-Change 00:09:57.426 Dataset Management (09h): Supported LBA-Change 00:09:57.426 Unknown (0Ch): Supported 00:09:57.426 Unknown (12h): Supported 00:09:57.426 Copy (19h): Supported LBA-Change 00:09:57.426 Unknown (1Dh): Supported LBA-Change 00:09:57.426 00:09:57.426 Error Log 00:09:57.426 ========= 00:09:57.426 00:09:57.426 Arbitration 00:09:57.426 =========== 00:09:57.426 Arbitration Burst: no limit 00:09:57.426 00:09:57.426 Power Management 00:09:57.426 ================ 00:09:57.426 Number of Power States: 1 00:09:57.426 Current Power State: Power State #0 00:09:57.426 Power State #0: 00:09:57.426 Max Power: 25.00 W 00:09:57.426 Non-Operational State: Operational 00:09:57.426 Entry Latency: 16 microseconds 00:09:57.426 Exit Latency: 4 microseconds 00:09:57.426 Relative Read Throughput: 0 00:09:57.426 Relative Read Latency: 0 00:09:57.426 Relative Write Throughput: 0 00:09:57.426 Relative Write Latency: 0 00:09:57.426 Idle Power: Not Reported 00:09:57.426 Active Power: Not Reported 00:09:57.426 Non-Operational Permissive Mode: Not Supported 00:09:57.426 00:09:57.426 Health Information 00:09:57.426 ================== 00:09:57.426 Critical Warnings: 00:09:57.426 Available Spare Space: OK 00:09:57.426 Temperature: OK 00:09:57.426 Device Reliability: OK 00:09:57.426 Read Only: No 00:09:57.426 Volatile Memory Backup: OK 00:09:57.426 Current Temperature: 323 Kelvin (50 Celsius) 00:09:57.426 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:57.426 Available Spare: 0% 00:09:57.426 Available Spare Threshold: 0% 00:09:57.426 Life Percentage Used: 0% 00:09:57.426 Data Units Read: 2514 00:09:57.426 Data Units Written: 2301 00:09:57.426 Host Read Commands: 93800 00:09:57.426 Host Write Commands: 92069 00:09:57.426 Controller Busy Time: 0 minutes 00:09:57.426 Power Cycles: 0 00:09:57.426 Power On Hours: 0 hours 00:09:57.426 Unsafe Shutdowns: 0 00:09:57.426 Unrecoverable Media Errors: 0 00:09:57.426 Lifetime Error Log Entries: 0 00:09:57.426 Warning Temperature Time: 0 minutes 00:09:57.426 Critical Temperature Time: 0 minutes 00:09:57.426 00:09:57.426 Number of Queues 00:09:57.426 ================ 00:09:57.426 Number of I/O Submission Queues: 64 00:09:57.426 Number of I/O Completion Queues: 64 00:09:57.426 00:09:57.426 ZNS Specific Controller Data 00:09:57.426 ============================ 00:09:57.426 Zone Append Size Limit: 0 00:09:57.426 00:09:57.426 00:09:57.426 Active Namespaces 00:09:57.426 ================= 00:09:57.426 Namespace ID:1 00:09:57.426 Error Recovery Timeout: Unlimited 00:09:57.426 Command Set Identifier: NVM (00h) 00:09:57.426 Deallocate: Supported 00:09:57.426 Deallocated/Unwritten Error: Supported 00:09:57.426 Deallocated Read Value: All 0x00 00:09:57.426 Deallocate in Write Zeroes: Not Supported 00:09:57.426 Deallocated Guard Field: 0xFFFF 00:09:57.426 Flush: Supported 00:09:57.426 Reservation: Not Supported 00:09:57.426 Namespace Sharing Capabilities: Private 00:09:57.426 Size (in LBAs): 1048576 (4GiB) 00:09:57.426 Capacity (in LBAs): 1048576 (4GiB) 00:09:57.427 Utilization (in LBAs): 1048576 (4GiB) 00:09:57.427 Thin Provisioning: Not Supported 00:09:57.427 Per-NS Atomic Units: No 00:09:57.427 Maximum Single Source Range Length: 128 00:09:57.427 Maximum Copy Length: 128 00:09:57.427 Maximum Source Range Count: 128 00:09:57.427 NGUID/EUI64 Never Reused: No 00:09:57.427 Namespace Write Protected: No 00:09:57.427 Number of LBA Formats: 8 00:09:57.427 Current LBA Format: LBA Format #04 00:09:57.427 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:09:57.427 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:57.427 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:57.427 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:57.427 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:57.427 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:57.427 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:57.427 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:57.427 00:09:57.427 NVM Specific Namespace Data 00:09:57.427 =========================== 00:09:57.427 Logical Block Storage Tag Mask: 0 00:09:57.427 Protection Information Capabilities: 00:09:57.427 16b Guard Protection Information Storage Tag Support: No 00:09:57.427 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:57.427 Storage Tag Check Read Support: No 00:09:57.427 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.427 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.427 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.427 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.427 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.427 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.427 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.427 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.427 Namespace ID:2 00:09:57.427 Error Recovery Timeout: Unlimited 00:09:57.427 Command Set Identifier: NVM (00h) 00:09:57.427 Deallocate: Supported 00:09:57.427 Deallocated/Unwritten Error: Supported 00:09:57.427 Deallocated Read Value: All 0x00 00:09:57.427 Deallocate in Write Zeroes: Not Supported 00:09:57.427 Deallocated Guard Field: 0xFFFF 00:09:57.427 Flush: Supported 00:09:57.427 Reservation: Not Supported 00:09:57.427 Namespace Sharing Capabilities: Private 00:09:57.427 Size (in LBAs): 1048576 (4GiB) 00:09:57.427 Capacity (in LBAs): 1048576 (4GiB) 00:09:57.427 Utilization (in LBAs): 1048576 (4GiB) 00:09:57.427 Thin Provisioning: Not Supported 00:09:57.427 Per-NS Atomic Units: No 00:09:57.427 Maximum Single Source Range Length: 128 00:09:57.427 Maximum Copy Length: 128 00:09:57.427 Maximum Source Range Count: 128 00:09:57.427 NGUID/EUI64 Never Reused: No 00:09:57.427 Namespace Write Protected: No 00:09:57.427 Number of LBA Formats: 8 00:09:57.427 Current LBA Format: LBA Format #04 00:09:57.427 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:57.427 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:57.427 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:57.427 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:57.427 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:57.427 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:57.427 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:57.427 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:57.427 00:09:57.427 NVM Specific Namespace Data 00:09:57.427 =========================== 00:09:57.427 Logical Block Storage Tag Mask: 0 00:09:57.427 Protection Information Capabilities: 00:09:57.427 16b Guard Protection Information Storage Tag Support: No 00:09:57.427 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:09:57.427 Storage Tag Check Read Support: No 00:09:57.427 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.427 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.427 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.427 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.427 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.427 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.427 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.427 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.427 Namespace ID:3 00:09:57.427 Error Recovery Timeout: Unlimited 00:09:57.427 Command Set Identifier: NVM (00h) 00:09:57.427 Deallocate: Supported 00:09:57.427 Deallocated/Unwritten Error: Supported 00:09:57.427 Deallocated Read Value: All 0x00 00:09:57.427 Deallocate in Write Zeroes: Not Supported 00:09:57.427 Deallocated Guard Field: 0xFFFF 00:09:57.427 Flush: Supported 00:09:57.427 Reservation: Not Supported 00:09:57.427 Namespace Sharing Capabilities: Private 00:09:57.427 Size (in LBAs): 1048576 (4GiB) 00:09:57.427 Capacity (in LBAs): 1048576 (4GiB) 00:09:57.427 Utilization (in LBAs): 1048576 (4GiB) 00:09:57.427 Thin Provisioning: Not Supported 00:09:57.427 Per-NS Atomic Units: No 00:09:57.427 Maximum Single Source Range Length: 128 00:09:57.427 Maximum Copy Length: 128 00:09:57.427 Maximum Source Range Count: 128 00:09:57.427 NGUID/EUI64 Never Reused: No 00:09:57.427 Namespace Write Protected: No 00:09:57.427 Number of LBA Formats: 8 00:09:57.427 Current LBA Format: LBA Format #04 00:09:57.427 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:57.427 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:57.427 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:57.427 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:57.427 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:57.427 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:57.427 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:57.427 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:57.427 00:09:57.427 NVM Specific Namespace Data 00:09:57.427 =========================== 00:09:57.427 Logical Block Storage Tag Mask: 0 00:09:57.427 Protection Information Capabilities: 00:09:57.427 16b Guard Protection Information Storage Tag Support: No 00:09:57.427 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:57.427 Storage Tag Check Read Support: No 00:09:57.427 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.427 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.427 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.427 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.427 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.427 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.427 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.427 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.427 03:54:40 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:57.427 03:54:40 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:09:57.688 ===================================================== 00:09:57.688 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:57.688 ===================================================== 00:09:57.688 Controller Capabilities/Features 00:09:57.688 ================================ 00:09:57.688 Vendor ID: 1b36 00:09:57.688 Subsystem Vendor ID: 1af4 00:09:57.688 Serial Number: 12340 00:09:57.688 Model Number: QEMU NVMe Ctrl 00:09:57.688 Firmware Version: 8.0.0 00:09:57.688 Recommended Arb Burst: 6 00:09:57.688 IEEE OUI Identifier: 00 54 52 00:09:57.688 Multi-path I/O 00:09:57.688 May have multiple subsystem ports: No 00:09:57.688 May have multiple controllers: No 00:09:57.688 Associated with SR-IOV VF: No 00:09:57.688 Max Data Transfer Size: 524288 00:09:57.688 Max Number of Namespaces: 256 00:09:57.688 Max Number of I/O Queues: 64 00:09:57.688 NVMe Specification Version (VS): 1.4 00:09:57.688 NVMe Specification Version (Identify): 1.4 00:09:57.688 Maximum Queue Entries: 2048 00:09:57.688 Contiguous Queues Required: Yes 00:09:57.688 Arbitration Mechanisms Supported 00:09:57.688 Weighted Round Robin: Not Supported 00:09:57.688 Vendor Specific: Not Supported 00:09:57.688 Reset Timeout: 7500 ms 00:09:57.688 Doorbell Stride: 4 bytes 00:09:57.688 NVM Subsystem Reset: Not Supported 00:09:57.688 Command Sets Supported 00:09:57.688 NVM Command Set: Supported 00:09:57.688 Boot Partition: Not Supported 00:09:57.688 Memory Page Size Minimum: 4096 bytes 00:09:57.688 Memory Page Size Maximum: 65536 bytes 00:09:57.688 Persistent Memory Region: Not Supported 00:09:57.688 Optional Asynchronous Events Supported 00:09:57.688 Namespace Attribute Notices: Supported 00:09:57.688 Firmware Activation Notices: Not Supported 00:09:57.688 ANA Change Notices: Not Supported 00:09:57.688 PLE Aggregate Log Change Notices: Not Supported 00:09:57.688 LBA Status Info Alert Notices: Not Supported 00:09:57.688 EGE Aggregate Log Change Notices: Not Supported 00:09:57.688 Normal NVM Subsystem Shutdown event: Not Supported 00:09:57.688 Zone Descriptor Change Notices: Not Supported 00:09:57.688 Discovery Log Change Notices: Not Supported 00:09:57.688 Controller Attributes 00:09:57.688 128-bit Host Identifier: Not Supported 00:09:57.688 Non-Operational Permissive Mode: Not Supported 00:09:57.688 NVM Sets: Not Supported 00:09:57.688 Read Recovery Levels: Not Supported 00:09:57.688 Endurance Groups: Not Supported 00:09:57.688 Predictable Latency Mode: Not Supported 00:09:57.688 Traffic Based Keep ALive: Not Supported 00:09:57.688 Namespace Granularity: Not Supported 00:09:57.688 SQ Associations: Not Supported 00:09:57.688 UUID List: Not Supported 00:09:57.688 Multi-Domain Subsystem: Not Supported 00:09:57.688 Fixed Capacity Management: Not Supported 00:09:57.688 Variable Capacity Management: Not Supported 00:09:57.688 Delete Endurance Group: Not Supported 00:09:57.688 Delete NVM Set: Not Supported 00:09:57.688 Extended LBA Formats Supported: Supported 00:09:57.688 Flexible Data Placement Supported: Not Supported 00:09:57.688 00:09:57.688 Controller Memory Buffer Support 00:09:57.688 ================================ 00:09:57.688 Supported: No 00:09:57.688 00:09:57.688 Persistent Memory Region Support 00:09:57.688 
================================ 00:09:57.688 Supported: No 00:09:57.688 00:09:57.688 Admin Command Set Attributes 00:09:57.688 ============================ 00:09:57.688 Security Send/Receive: Not Supported 00:09:57.688 Format NVM: Supported 00:09:57.688 Firmware Activate/Download: Not Supported 00:09:57.688 Namespace Management: Supported 00:09:57.688 Device Self-Test: Not Supported 00:09:57.688 Directives: Supported 00:09:57.688 NVMe-MI: Not Supported 00:09:57.688 Virtualization Management: Not Supported 00:09:57.688 Doorbell Buffer Config: Supported 00:09:57.688 Get LBA Status Capability: Not Supported 00:09:57.688 Command & Feature Lockdown Capability: Not Supported 00:09:57.688 Abort Command Limit: 4 00:09:57.688 Async Event Request Limit: 4 00:09:57.688 Number of Firmware Slots: N/A 00:09:57.688 Firmware Slot 1 Read-Only: N/A 00:09:57.688 Firmware Activation Without Reset: N/A 00:09:57.688 Multiple Update Detection Support: N/A 00:09:57.688 Firmware Update Granularity: No Information Provided 00:09:57.688 Per-Namespace SMART Log: Yes 00:09:57.688 Asymmetric Namespace Access Log Page: Not Supported 00:09:57.688 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:57.688 Command Effects Log Page: Supported 00:09:57.688 Get Log Page Extended Data: Supported 00:09:57.688 Telemetry Log Pages: Not Supported 00:09:57.688 Persistent Event Log Pages: Not Supported 00:09:57.688 Supported Log Pages Log Page: May Support 00:09:57.688 Commands Supported & Effects Log Page: Not Supported 00:09:57.688 Feature Identifiers & Effects Log Page:May Support 00:09:57.688 NVMe-MI Commands & Effects Log Page: May Support 00:09:57.688 Data Area 4 for Telemetry Log: Not Supported 00:09:57.688 Error Log Page Entries Supported: 1 00:09:57.688 Keep Alive: Not Supported 00:09:57.688 00:09:57.688 NVM Command Set Attributes 00:09:57.688 ========================== 00:09:57.688 Submission Queue Entry Size 00:09:57.688 Max: 64 00:09:57.688 Min: 64 00:09:57.688 Completion Queue Entry Size 00:09:57.688 Max: 16 00:09:57.688 Min: 16 00:09:57.688 Number of Namespaces: 256 00:09:57.688 Compare Command: Supported 00:09:57.688 Write Uncorrectable Command: Not Supported 00:09:57.688 Dataset Management Command: Supported 00:09:57.688 Write Zeroes Command: Supported 00:09:57.688 Set Features Save Field: Supported 00:09:57.688 Reservations: Not Supported 00:09:57.688 Timestamp: Supported 00:09:57.688 Copy: Supported 00:09:57.688 Volatile Write Cache: Present 00:09:57.688 Atomic Write Unit (Normal): 1 00:09:57.688 Atomic Write Unit (PFail): 1 00:09:57.688 Atomic Compare & Write Unit: 1 00:09:57.688 Fused Compare & Write: Not Supported 00:09:57.688 Scatter-Gather List 00:09:57.688 SGL Command Set: Supported 00:09:57.688 SGL Keyed: Not Supported 00:09:57.688 SGL Bit Bucket Descriptor: Not Supported 00:09:57.688 SGL Metadata Pointer: Not Supported 00:09:57.688 Oversized SGL: Not Supported 00:09:57.688 SGL Metadata Address: Not Supported 00:09:57.688 SGL Offset: Not Supported 00:09:57.688 Transport SGL Data Block: Not Supported 00:09:57.688 Replay Protected Memory Block: Not Supported 00:09:57.688 00:09:57.688 Firmware Slot Information 00:09:57.688 ========================= 00:09:57.688 Active slot: 1 00:09:57.689 Slot 1 Firmware Revision: 1.0 00:09:57.689 00:09:57.689 00:09:57.689 Commands Supported and Effects 00:09:57.689 ============================== 00:09:57.689 Admin Commands 00:09:57.689 -------------- 00:09:57.689 Delete I/O Submission Queue (00h): Supported 00:09:57.689 Create I/O Submission Queue (01h): Supported 00:09:57.689 
Get Log Page (02h): Supported 00:09:57.689 Delete I/O Completion Queue (04h): Supported 00:09:57.689 Create I/O Completion Queue (05h): Supported 00:09:57.689 Identify (06h): Supported 00:09:57.689 Abort (08h): Supported 00:09:57.689 Set Features (09h): Supported 00:09:57.689 Get Features (0Ah): Supported 00:09:57.689 Asynchronous Event Request (0Ch): Supported 00:09:57.689 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:57.689 Directive Send (19h): Supported 00:09:57.689 Directive Receive (1Ah): Supported 00:09:57.689 Virtualization Management (1Ch): Supported 00:09:57.689 Doorbell Buffer Config (7Ch): Supported 00:09:57.689 Format NVM (80h): Supported LBA-Change 00:09:57.689 I/O Commands 00:09:57.689 ------------ 00:09:57.689 Flush (00h): Supported LBA-Change 00:09:57.689 Write (01h): Supported LBA-Change 00:09:57.689 Read (02h): Supported 00:09:57.689 Compare (05h): Supported 00:09:57.689 Write Zeroes (08h): Supported LBA-Change 00:09:57.689 Dataset Management (09h): Supported LBA-Change 00:09:57.689 Unknown (0Ch): Supported 00:09:57.689 Unknown (12h): Supported 00:09:57.689 Copy (19h): Supported LBA-Change 00:09:57.689 Unknown (1Dh): Supported LBA-Change 00:09:57.689 00:09:57.689 Error Log 00:09:57.689 ========= 00:09:57.689 00:09:57.689 Arbitration 00:09:57.689 =========== 00:09:57.689 Arbitration Burst: no limit 00:09:57.689 00:09:57.689 Power Management 00:09:57.689 ================ 00:09:57.689 Number of Power States: 1 00:09:57.689 Current Power State: Power State #0 00:09:57.689 Power State #0: 00:09:57.689 Max Power: 25.00 W 00:09:57.689 Non-Operational State: Operational 00:09:57.689 Entry Latency: 16 microseconds 00:09:57.689 Exit Latency: 4 microseconds 00:09:57.689 Relative Read Throughput: 0 00:09:57.689 Relative Read Latency: 0 00:09:57.689 Relative Write Throughput: 0 00:09:57.689 Relative Write Latency: 0 00:09:57.949 Idle Power: Not Reported 00:09:57.949 Active Power: Not Reported 00:09:57.949 Non-Operational Permissive Mode: Not Supported 00:09:57.949 00:09:57.949 Health Information 00:09:57.949 ================== 00:09:57.949 Critical Warnings: 00:09:57.949 Available Spare Space: OK 00:09:57.949 Temperature: OK 00:09:57.949 Device Reliability: OK 00:09:57.949 Read Only: No 00:09:57.949 Volatile Memory Backup: OK 00:09:57.949 Current Temperature: 323 Kelvin (50 Celsius) 00:09:57.949 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:57.949 Available Spare: 0% 00:09:57.949 Available Spare Threshold: 0% 00:09:57.949 Life Percentage Used: 0% 00:09:57.949 Data Units Read: 732 00:09:57.949 Data Units Written: 660 00:09:57.949 Host Read Commands: 30160 00:09:57.949 Host Write Commands: 29946 00:09:57.949 Controller Busy Time: 0 minutes 00:09:57.949 Power Cycles: 0 00:09:57.949 Power On Hours: 0 hours 00:09:57.949 Unsafe Shutdowns: 0 00:09:57.949 Unrecoverable Media Errors: 0 00:09:57.949 Lifetime Error Log Entries: 0 00:09:57.949 Warning Temperature Time: 0 minutes 00:09:57.949 Critical Temperature Time: 0 minutes 00:09:57.949 00:09:57.949 Number of Queues 00:09:57.949 ================ 00:09:57.949 Number of I/O Submission Queues: 64 00:09:57.949 Number of I/O Completion Queues: 64 00:09:57.949 00:09:57.949 ZNS Specific Controller Data 00:09:57.949 ============================ 00:09:57.949 Zone Append Size Limit: 0 00:09:57.949 00:09:57.949 00:09:57.949 Active Namespaces 00:09:57.949 ================= 00:09:57.949 Namespace ID:1 00:09:57.949 Error Recovery Timeout: Unlimited 00:09:57.949 Command Set Identifier: NVM (00h) 00:09:57.949 Deallocate: Supported 
00:09:57.949 Deallocated/Unwritten Error: Supported 00:09:57.949 Deallocated Read Value: All 0x00 00:09:57.949 Deallocate in Write Zeroes: Not Supported 00:09:57.949 Deallocated Guard Field: 0xFFFF 00:09:57.949 Flush: Supported 00:09:57.949 Reservation: Not Supported 00:09:57.949 Metadata Transferred as: Separate Metadata Buffer 00:09:57.949 Namespace Sharing Capabilities: Private 00:09:57.949 Size (in LBAs): 1548666 (5GiB) 00:09:57.949 Capacity (in LBAs): 1548666 (5GiB) 00:09:57.949 Utilization (in LBAs): 1548666 (5GiB) 00:09:57.949 Thin Provisioning: Not Supported 00:09:57.949 Per-NS Atomic Units: No 00:09:57.949 Maximum Single Source Range Length: 128 00:09:57.949 Maximum Copy Length: 128 00:09:57.949 Maximum Source Range Count: 128 00:09:57.949 NGUID/EUI64 Never Reused: No 00:09:57.949 Namespace Write Protected: No 00:09:57.949 Number of LBA Formats: 8 00:09:57.949 Current LBA Format: LBA Format #07 00:09:57.949 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:57.949 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:57.949 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:57.949 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:57.949 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:57.949 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:57.949 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:57.949 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:57.949 00:09:57.949 NVM Specific Namespace Data 00:09:57.949 =========================== 00:09:57.949 Logical Block Storage Tag Mask: 0 00:09:57.949 Protection Information Capabilities: 00:09:57.950 16b Guard Protection Information Storage Tag Support: No 00:09:57.950 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:57.950 Storage Tag Check Read Support: No 00:09:57.950 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.950 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.950 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.950 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.950 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.950 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.950 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.950 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:57.950 03:54:40 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:57.950 03:54:40 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:09:58.211 ===================================================== 00:09:58.211 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:58.211 ===================================================== 00:09:58.211 Controller Capabilities/Features 00:09:58.211 ================================ 00:09:58.211 Vendor ID: 1b36 00:09:58.211 Subsystem Vendor ID: 1af4 00:09:58.211 Serial Number: 12341 00:09:58.211 Model Number: QEMU NVMe Ctrl 00:09:58.211 Firmware Version: 8.0.0 00:09:58.211 Recommended Arb Burst: 6 00:09:58.211 IEEE OUI Identifier: 00 54 52 00:09:58.211 Multi-path I/O 00:09:58.211 May have multiple subsystem ports: No 00:09:58.211 May have multiple 
controllers: No 00:09:58.211 Associated with SR-IOV VF: No 00:09:58.211 Max Data Transfer Size: 524288 00:09:58.211 Max Number of Namespaces: 256 00:09:58.211 Max Number of I/O Queues: 64 00:09:58.211 NVMe Specification Version (VS): 1.4 00:09:58.211 NVMe Specification Version (Identify): 1.4 00:09:58.211 Maximum Queue Entries: 2048 00:09:58.211 Contiguous Queues Required: Yes 00:09:58.211 Arbitration Mechanisms Supported 00:09:58.211 Weighted Round Robin: Not Supported 00:09:58.211 Vendor Specific: Not Supported 00:09:58.211 Reset Timeout: 7500 ms 00:09:58.211 Doorbell Stride: 4 bytes 00:09:58.211 NVM Subsystem Reset: Not Supported 00:09:58.211 Command Sets Supported 00:09:58.211 NVM Command Set: Supported 00:09:58.211 Boot Partition: Not Supported 00:09:58.211 Memory Page Size Minimum: 4096 bytes 00:09:58.211 Memory Page Size Maximum: 65536 bytes 00:09:58.211 Persistent Memory Region: Not Supported 00:09:58.211 Optional Asynchronous Events Supported 00:09:58.211 Namespace Attribute Notices: Supported 00:09:58.211 Firmware Activation Notices: Not Supported 00:09:58.211 ANA Change Notices: Not Supported 00:09:58.211 PLE Aggregate Log Change Notices: Not Supported 00:09:58.211 LBA Status Info Alert Notices: Not Supported 00:09:58.211 EGE Aggregate Log Change Notices: Not Supported 00:09:58.211 Normal NVM Subsystem Shutdown event: Not Supported 00:09:58.211 Zone Descriptor Change Notices: Not Supported 00:09:58.211 Discovery Log Change Notices: Not Supported 00:09:58.211 Controller Attributes 00:09:58.211 128-bit Host Identifier: Not Supported 00:09:58.211 Non-Operational Permissive Mode: Not Supported 00:09:58.211 NVM Sets: Not Supported 00:09:58.211 Read Recovery Levels: Not Supported 00:09:58.211 Endurance Groups: Not Supported 00:09:58.211 Predictable Latency Mode: Not Supported 00:09:58.211 Traffic Based Keep ALive: Not Supported 00:09:58.211 Namespace Granularity: Not Supported 00:09:58.211 SQ Associations: Not Supported 00:09:58.211 UUID List: Not Supported 00:09:58.211 Multi-Domain Subsystem: Not Supported 00:09:58.211 Fixed Capacity Management: Not Supported 00:09:58.211 Variable Capacity Management: Not Supported 00:09:58.211 Delete Endurance Group: Not Supported 00:09:58.211 Delete NVM Set: Not Supported 00:09:58.211 Extended LBA Formats Supported: Supported 00:09:58.211 Flexible Data Placement Supported: Not Supported 00:09:58.211 00:09:58.211 Controller Memory Buffer Support 00:09:58.211 ================================ 00:09:58.211 Supported: No 00:09:58.211 00:09:58.211 Persistent Memory Region Support 00:09:58.211 ================================ 00:09:58.211 Supported: No 00:09:58.211 00:09:58.211 Admin Command Set Attributes 00:09:58.211 ============================ 00:09:58.211 Security Send/Receive: Not Supported 00:09:58.211 Format NVM: Supported 00:09:58.211 Firmware Activate/Download: Not Supported 00:09:58.211 Namespace Management: Supported 00:09:58.211 Device Self-Test: Not Supported 00:09:58.211 Directives: Supported 00:09:58.211 NVMe-MI: Not Supported 00:09:58.211 Virtualization Management: Not Supported 00:09:58.211 Doorbell Buffer Config: Supported 00:09:58.211 Get LBA Status Capability: Not Supported 00:09:58.211 Command & Feature Lockdown Capability: Not Supported 00:09:58.211 Abort Command Limit: 4 00:09:58.211 Async Event Request Limit: 4 00:09:58.211 Number of Firmware Slots: N/A 00:09:58.211 Firmware Slot 1 Read-Only: N/A 00:09:58.211 Firmware Activation Without Reset: N/A 00:09:58.211 Multiple Update Detection Support: N/A 00:09:58.211 Firmware Update 
Granularity: No Information Provided 00:09:58.211 Per-Namespace SMART Log: Yes 00:09:58.211 Asymmetric Namespace Access Log Page: Not Supported 00:09:58.211 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:58.211 Command Effects Log Page: Supported 00:09:58.211 Get Log Page Extended Data: Supported 00:09:58.211 Telemetry Log Pages: Not Supported 00:09:58.211 Persistent Event Log Pages: Not Supported 00:09:58.211 Supported Log Pages Log Page: May Support 00:09:58.211 Commands Supported & Effects Log Page: Not Supported 00:09:58.211 Feature Identifiers & Effects Log Page:May Support 00:09:58.211 NVMe-MI Commands & Effects Log Page: May Support 00:09:58.211 Data Area 4 for Telemetry Log: Not Supported 00:09:58.211 Error Log Page Entries Supported: 1 00:09:58.211 Keep Alive: Not Supported 00:09:58.211 00:09:58.212 NVM Command Set Attributes 00:09:58.212 ========================== 00:09:58.212 Submission Queue Entry Size 00:09:58.212 Max: 64 00:09:58.212 Min: 64 00:09:58.212 Completion Queue Entry Size 00:09:58.212 Max: 16 00:09:58.212 Min: 16 00:09:58.212 Number of Namespaces: 256 00:09:58.212 Compare Command: Supported 00:09:58.212 Write Uncorrectable Command: Not Supported 00:09:58.212 Dataset Management Command: Supported 00:09:58.212 Write Zeroes Command: Supported 00:09:58.212 Set Features Save Field: Supported 00:09:58.212 Reservations: Not Supported 00:09:58.212 Timestamp: Supported 00:09:58.212 Copy: Supported 00:09:58.212 Volatile Write Cache: Present 00:09:58.212 Atomic Write Unit (Normal): 1 00:09:58.212 Atomic Write Unit (PFail): 1 00:09:58.212 Atomic Compare & Write Unit: 1 00:09:58.212 Fused Compare & Write: Not Supported 00:09:58.212 Scatter-Gather List 00:09:58.212 SGL Command Set: Supported 00:09:58.212 SGL Keyed: Not Supported 00:09:58.212 SGL Bit Bucket Descriptor: Not Supported 00:09:58.212 SGL Metadata Pointer: Not Supported 00:09:58.212 Oversized SGL: Not Supported 00:09:58.212 SGL Metadata Address: Not Supported 00:09:58.212 SGL Offset: Not Supported 00:09:58.212 Transport SGL Data Block: Not Supported 00:09:58.212 Replay Protected Memory Block: Not Supported 00:09:58.212 00:09:58.212 Firmware Slot Information 00:09:58.212 ========================= 00:09:58.212 Active slot: 1 00:09:58.212 Slot 1 Firmware Revision: 1.0 00:09:58.212 00:09:58.212 00:09:58.212 Commands Supported and Effects 00:09:58.212 ============================== 00:09:58.212 Admin Commands 00:09:58.212 -------------- 00:09:58.212 Delete I/O Submission Queue (00h): Supported 00:09:58.212 Create I/O Submission Queue (01h): Supported 00:09:58.212 Get Log Page (02h): Supported 00:09:58.212 Delete I/O Completion Queue (04h): Supported 00:09:58.212 Create I/O Completion Queue (05h): Supported 00:09:58.212 Identify (06h): Supported 00:09:58.212 Abort (08h): Supported 00:09:58.212 Set Features (09h): Supported 00:09:58.212 Get Features (0Ah): Supported 00:09:58.212 Asynchronous Event Request (0Ch): Supported 00:09:58.212 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:58.212 Directive Send (19h): Supported 00:09:58.212 Directive Receive (1Ah): Supported 00:09:58.212 Virtualization Management (1Ch): Supported 00:09:58.212 Doorbell Buffer Config (7Ch): Supported 00:09:58.212 Format NVM (80h): Supported LBA-Change 00:09:58.212 I/O Commands 00:09:58.212 ------------ 00:09:58.212 Flush (00h): Supported LBA-Change 00:09:58.212 Write (01h): Supported LBA-Change 00:09:58.212 Read (02h): Supported 00:09:58.212 Compare (05h): Supported 00:09:58.212 Write Zeroes (08h): Supported LBA-Change 00:09:58.212 
Dataset Management (09h): Supported LBA-Change 00:09:58.212 Unknown (0Ch): Supported 00:09:58.212 Unknown (12h): Supported 00:09:58.212 Copy (19h): Supported LBA-Change 00:09:58.212 Unknown (1Dh): Supported LBA-Change 00:09:58.212 00:09:58.212 Error Log 00:09:58.212 ========= 00:09:58.212 00:09:58.212 Arbitration 00:09:58.212 =========== 00:09:58.212 Arbitration Burst: no limit 00:09:58.212 00:09:58.212 Power Management 00:09:58.212 ================ 00:09:58.212 Number of Power States: 1 00:09:58.212 Current Power State: Power State #0 00:09:58.212 Power State #0: 00:09:58.212 Max Power: 25.00 W 00:09:58.212 Non-Operational State: Operational 00:09:58.212 Entry Latency: 16 microseconds 00:09:58.212 Exit Latency: 4 microseconds 00:09:58.212 Relative Read Throughput: 0 00:09:58.212 Relative Read Latency: 0 00:09:58.212 Relative Write Throughput: 0 00:09:58.212 Relative Write Latency: 0 00:09:58.212 Idle Power: Not Reported 00:09:58.212 Active Power: Not Reported 00:09:58.212 Non-Operational Permissive Mode: Not Supported 00:09:58.212 00:09:58.212 Health Information 00:09:58.212 ================== 00:09:58.212 Critical Warnings: 00:09:58.212 Available Spare Space: OK 00:09:58.212 Temperature: OK 00:09:58.212 Device Reliability: OK 00:09:58.212 Read Only: No 00:09:58.212 Volatile Memory Backup: OK 00:09:58.212 Current Temperature: 323 Kelvin (50 Celsius) 00:09:58.212 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:58.212 Available Spare: 0% 00:09:58.212 Available Spare Threshold: 0% 00:09:58.212 Life Percentage Used: 0% 00:09:58.212 Data Units Read: 1134 00:09:58.212 Data Units Written: 1000 00:09:58.212 Host Read Commands: 45477 00:09:58.212 Host Write Commands: 44241 00:09:58.212 Controller Busy Time: 0 minutes 00:09:58.212 Power Cycles: 0 00:09:58.212 Power On Hours: 0 hours 00:09:58.212 Unsafe Shutdowns: 0 00:09:58.212 Unrecoverable Media Errors: 0 00:09:58.212 Lifetime Error Log Entries: 0 00:09:58.212 Warning Temperature Time: 0 minutes 00:09:58.212 Critical Temperature Time: 0 minutes 00:09:58.212 00:09:58.212 Number of Queues 00:09:58.212 ================ 00:09:58.212 Number of I/O Submission Queues: 64 00:09:58.212 Number of I/O Completion Queues: 64 00:09:58.212 00:09:58.212 ZNS Specific Controller Data 00:09:58.212 ============================ 00:09:58.212 Zone Append Size Limit: 0 00:09:58.212 00:09:58.212 00:09:58.212 Active Namespaces 00:09:58.212 ================= 00:09:58.212 Namespace ID:1 00:09:58.212 Error Recovery Timeout: Unlimited 00:09:58.212 Command Set Identifier: NVM (00h) 00:09:58.212 Deallocate: Supported 00:09:58.212 Deallocated/Unwritten Error: Supported 00:09:58.212 Deallocated Read Value: All 0x00 00:09:58.212 Deallocate in Write Zeroes: Not Supported 00:09:58.212 Deallocated Guard Field: 0xFFFF 00:09:58.212 Flush: Supported 00:09:58.212 Reservation: Not Supported 00:09:58.212 Namespace Sharing Capabilities: Private 00:09:58.212 Size (in LBAs): 1310720 (5GiB) 00:09:58.212 Capacity (in LBAs): 1310720 (5GiB) 00:09:58.212 Utilization (in LBAs): 1310720 (5GiB) 00:09:58.212 Thin Provisioning: Not Supported 00:09:58.212 Per-NS Atomic Units: No 00:09:58.213 Maximum Single Source Range Length: 128 00:09:58.213 Maximum Copy Length: 128 00:09:58.213 Maximum Source Range Count: 128 00:09:58.213 NGUID/EUI64 Never Reused: No 00:09:58.213 Namespace Write Protected: No 00:09:58.213 Number of LBA Formats: 8 00:09:58.213 Current LBA Format: LBA Format #04 00:09:58.213 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:58.213 LBA Format #01: Data Size: 512 Metadata Size: 
8 00:09:58.213 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:58.213 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:58.213 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:58.213 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:58.213 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:58.213 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:58.213 00:09:58.213 NVM Specific Namespace Data 00:09:58.213 =========================== 00:09:58.213 Logical Block Storage Tag Mask: 0 00:09:58.213 Protection Information Capabilities: 00:09:58.213 16b Guard Protection Information Storage Tag Support: No 00:09:58.213 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:58.213 Storage Tag Check Read Support: No 00:09:58.213 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.213 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.213 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.213 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.213 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.213 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.213 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.213 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.213 03:54:40 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:58.213 03:54:40 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:09:58.474 ===================================================== 00:09:58.474 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:58.474 ===================================================== 00:09:58.474 Controller Capabilities/Features 00:09:58.474 ================================ 00:09:58.474 Vendor ID: 1b36 00:09:58.474 Subsystem Vendor ID: 1af4 00:09:58.474 Serial Number: 12342 00:09:58.474 Model Number: QEMU NVMe Ctrl 00:09:58.474 Firmware Version: 8.0.0 00:09:58.474 Recommended Arb Burst: 6 00:09:58.474 IEEE OUI Identifier: 00 54 52 00:09:58.474 Multi-path I/O 00:09:58.474 May have multiple subsystem ports: No 00:09:58.474 May have multiple controllers: No 00:09:58.474 Associated with SR-IOV VF: No 00:09:58.474 Max Data Transfer Size: 524288 00:09:58.474 Max Number of Namespaces: 256 00:09:58.474 Max Number of I/O Queues: 64 00:09:58.474 NVMe Specification Version (VS): 1.4 00:09:58.474 NVMe Specification Version (Identify): 1.4 00:09:58.474 Maximum Queue Entries: 2048 00:09:58.474 Contiguous Queues Required: Yes 00:09:58.474 Arbitration Mechanisms Supported 00:09:58.474 Weighted Round Robin: Not Supported 00:09:58.474 Vendor Specific: Not Supported 00:09:58.474 Reset Timeout: 7500 ms 00:09:58.474 Doorbell Stride: 4 bytes 00:09:58.474 NVM Subsystem Reset: Not Supported 00:09:58.474 Command Sets Supported 00:09:58.474 NVM Command Set: Supported 00:09:58.474 Boot Partition: Not Supported 00:09:58.474 Memory Page Size Minimum: 4096 bytes 00:09:58.474 Memory Page Size Maximum: 65536 bytes 00:09:58.474 Persistent Memory Region: Not Supported 00:09:58.474 Optional Asynchronous Events Supported 00:09:58.474 Namespace Attribute Notices: Supported 00:09:58.474 
Firmware Activation Notices: Not Supported 00:09:58.474 ANA Change Notices: Not Supported 00:09:58.474 PLE Aggregate Log Change Notices: Not Supported 00:09:58.474 LBA Status Info Alert Notices: Not Supported 00:09:58.474 EGE Aggregate Log Change Notices: Not Supported 00:09:58.474 Normal NVM Subsystem Shutdown event: Not Supported 00:09:58.474 Zone Descriptor Change Notices: Not Supported 00:09:58.474 Discovery Log Change Notices: Not Supported 00:09:58.474 Controller Attributes 00:09:58.474 128-bit Host Identifier: Not Supported 00:09:58.474 Non-Operational Permissive Mode: Not Supported 00:09:58.474 NVM Sets: Not Supported 00:09:58.474 Read Recovery Levels: Not Supported 00:09:58.474 Endurance Groups: Not Supported 00:09:58.474 Predictable Latency Mode: Not Supported 00:09:58.474 Traffic Based Keep ALive: Not Supported 00:09:58.474 Namespace Granularity: Not Supported 00:09:58.474 SQ Associations: Not Supported 00:09:58.474 UUID List: Not Supported 00:09:58.474 Multi-Domain Subsystem: Not Supported 00:09:58.474 Fixed Capacity Management: Not Supported 00:09:58.474 Variable Capacity Management: Not Supported 00:09:58.474 Delete Endurance Group: Not Supported 00:09:58.474 Delete NVM Set: Not Supported 00:09:58.474 Extended LBA Formats Supported: Supported 00:09:58.474 Flexible Data Placement Supported: Not Supported 00:09:58.474 00:09:58.474 Controller Memory Buffer Support 00:09:58.474 ================================ 00:09:58.474 Supported: No 00:09:58.474 00:09:58.474 Persistent Memory Region Support 00:09:58.474 ================================ 00:09:58.474 Supported: No 00:09:58.474 00:09:58.474 Admin Command Set Attributes 00:09:58.474 ============================ 00:09:58.474 Security Send/Receive: Not Supported 00:09:58.474 Format NVM: Supported 00:09:58.474 Firmware Activate/Download: Not Supported 00:09:58.474 Namespace Management: Supported 00:09:58.474 Device Self-Test: Not Supported 00:09:58.474 Directives: Supported 00:09:58.474 NVMe-MI: Not Supported 00:09:58.474 Virtualization Management: Not Supported 00:09:58.474 Doorbell Buffer Config: Supported 00:09:58.474 Get LBA Status Capability: Not Supported 00:09:58.474 Command & Feature Lockdown Capability: Not Supported 00:09:58.474 Abort Command Limit: 4 00:09:58.474 Async Event Request Limit: 4 00:09:58.474 Number of Firmware Slots: N/A 00:09:58.474 Firmware Slot 1 Read-Only: N/A 00:09:58.474 Firmware Activation Without Reset: N/A 00:09:58.474 Multiple Update Detection Support: N/A 00:09:58.474 Firmware Update Granularity: No Information Provided 00:09:58.474 Per-Namespace SMART Log: Yes 00:09:58.474 Asymmetric Namespace Access Log Page: Not Supported 00:09:58.474 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:58.474 Command Effects Log Page: Supported 00:09:58.474 Get Log Page Extended Data: Supported 00:09:58.474 Telemetry Log Pages: Not Supported 00:09:58.474 Persistent Event Log Pages: Not Supported 00:09:58.474 Supported Log Pages Log Page: May Support 00:09:58.474 Commands Supported & Effects Log Page: Not Supported 00:09:58.474 Feature Identifiers & Effects Log Page:May Support 00:09:58.474 NVMe-MI Commands & Effects Log Page: May Support 00:09:58.474 Data Area 4 for Telemetry Log: Not Supported 00:09:58.474 Error Log Page Entries Supported: 1 00:09:58.474 Keep Alive: Not Supported 00:09:58.474 00:09:58.474 NVM Command Set Attributes 00:09:58.474 ========================== 00:09:58.474 Submission Queue Entry Size 00:09:58.474 Max: 64 00:09:58.474 Min: 64 00:09:58.474 Completion Queue Entry Size 00:09:58.474 Max: 16 
00:09:58.474 Min: 16 00:09:58.474 Number of Namespaces: 256 00:09:58.474 Compare Command: Supported 00:09:58.474 Write Uncorrectable Command: Not Supported 00:09:58.474 Dataset Management Command: Supported 00:09:58.474 Write Zeroes Command: Supported 00:09:58.474 Set Features Save Field: Supported 00:09:58.474 Reservations: Not Supported 00:09:58.474 Timestamp: Supported 00:09:58.474 Copy: Supported 00:09:58.475 Volatile Write Cache: Present 00:09:58.475 Atomic Write Unit (Normal): 1 00:09:58.475 Atomic Write Unit (PFail): 1 00:09:58.475 Atomic Compare & Write Unit: 1 00:09:58.475 Fused Compare & Write: Not Supported 00:09:58.475 Scatter-Gather List 00:09:58.475 SGL Command Set: Supported 00:09:58.475 SGL Keyed: Not Supported 00:09:58.475 SGL Bit Bucket Descriptor: Not Supported 00:09:58.475 SGL Metadata Pointer: Not Supported 00:09:58.475 Oversized SGL: Not Supported 00:09:58.475 SGL Metadata Address: Not Supported 00:09:58.475 SGL Offset: Not Supported 00:09:58.475 Transport SGL Data Block: Not Supported 00:09:58.475 Replay Protected Memory Block: Not Supported 00:09:58.475 00:09:58.475 Firmware Slot Information 00:09:58.475 ========================= 00:09:58.475 Active slot: 1 00:09:58.475 Slot 1 Firmware Revision: 1.0 00:09:58.475 00:09:58.475 00:09:58.475 Commands Supported and Effects 00:09:58.475 ============================== 00:09:58.475 Admin Commands 00:09:58.475 -------------- 00:09:58.475 Delete I/O Submission Queue (00h): Supported 00:09:58.475 Create I/O Submission Queue (01h): Supported 00:09:58.475 Get Log Page (02h): Supported 00:09:58.475 Delete I/O Completion Queue (04h): Supported 00:09:58.475 Create I/O Completion Queue (05h): Supported 00:09:58.475 Identify (06h): Supported 00:09:58.475 Abort (08h): Supported 00:09:58.475 Set Features (09h): Supported 00:09:58.475 Get Features (0Ah): Supported 00:09:58.475 Asynchronous Event Request (0Ch): Supported 00:09:58.475 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:58.475 Directive Send (19h): Supported 00:09:58.475 Directive Receive (1Ah): Supported 00:09:58.475 Virtualization Management (1Ch): Supported 00:09:58.475 Doorbell Buffer Config (7Ch): Supported 00:09:58.475 Format NVM (80h): Supported LBA-Change 00:09:58.475 I/O Commands 00:09:58.475 ------------ 00:09:58.475 Flush (00h): Supported LBA-Change 00:09:58.475 Write (01h): Supported LBA-Change 00:09:58.475 Read (02h): Supported 00:09:58.475 Compare (05h): Supported 00:09:58.475 Write Zeroes (08h): Supported LBA-Change 00:09:58.475 Dataset Management (09h): Supported LBA-Change 00:09:58.475 Unknown (0Ch): Supported 00:09:58.475 Unknown (12h): Supported 00:09:58.475 Copy (19h): Supported LBA-Change 00:09:58.475 Unknown (1Dh): Supported LBA-Change 00:09:58.475 00:09:58.475 Error Log 00:09:58.475 ========= 00:09:58.475 00:09:58.475 Arbitration 00:09:58.475 =========== 00:09:58.475 Arbitration Burst: no limit 00:09:58.475 00:09:58.475 Power Management 00:09:58.475 ================ 00:09:58.475 Number of Power States: 1 00:09:58.475 Current Power State: Power State #0 00:09:58.475 Power State #0: 00:09:58.475 Max Power: 25.00 W 00:09:58.475 Non-Operational State: Operational 00:09:58.475 Entry Latency: 16 microseconds 00:09:58.475 Exit Latency: 4 microseconds 00:09:58.475 Relative Read Throughput: 0 00:09:58.475 Relative Read Latency: 0 00:09:58.475 Relative Write Throughput: 0 00:09:58.475 Relative Write Latency: 0 00:09:58.475 Idle Power: Not Reported 00:09:58.475 Active Power: Not Reported 00:09:58.475 Non-Operational Permissive Mode: Not Supported 
00:09:58.475 00:09:58.475 Health Information 00:09:58.475 ================== 00:09:58.475 Critical Warnings: 00:09:58.475 Available Spare Space: OK 00:09:58.475 Temperature: OK 00:09:58.475 Device Reliability: OK 00:09:58.475 Read Only: No 00:09:58.475 Volatile Memory Backup: OK 00:09:58.475 Current Temperature: 323 Kelvin (50 Celsius) 00:09:58.475 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:58.475 Available Spare: 0% 00:09:58.475 Available Spare Threshold: 0% 00:09:58.475 Life Percentage Used: 0% 00:09:58.475 Data Units Read: 2514 00:09:58.475 Data Units Written: 2301 00:09:58.475 Host Read Commands: 93800 00:09:58.475 Host Write Commands: 92069 00:09:58.475 Controller Busy Time: 0 minutes 00:09:58.475 Power Cycles: 0 00:09:58.475 Power On Hours: 0 hours 00:09:58.475 Unsafe Shutdowns: 0 00:09:58.475 Unrecoverable Media Errors: 0 00:09:58.475 Lifetime Error Log Entries: 0 00:09:58.475 Warning Temperature Time: 0 minutes 00:09:58.475 Critical Temperature Time: 0 minutes 00:09:58.475 00:09:58.475 Number of Queues 00:09:58.475 ================ 00:09:58.475 Number of I/O Submission Queues: 64 00:09:58.475 Number of I/O Completion Queues: 64 00:09:58.475 00:09:58.475 ZNS Specific Controller Data 00:09:58.475 ============================ 00:09:58.475 Zone Append Size Limit: 0 00:09:58.475 00:09:58.475 00:09:58.475 Active Namespaces 00:09:58.475 ================= 00:09:58.475 Namespace ID:1 00:09:58.475 Error Recovery Timeout: Unlimited 00:09:58.475 Command Set Identifier: NVM (00h) 00:09:58.475 Deallocate: Supported 00:09:58.475 Deallocated/Unwritten Error: Supported 00:09:58.475 Deallocated Read Value: All 0x00 00:09:58.475 Deallocate in Write Zeroes: Not Supported 00:09:58.475 Deallocated Guard Field: 0xFFFF 00:09:58.475 Flush: Supported 00:09:58.475 Reservation: Not Supported 00:09:58.475 Namespace Sharing Capabilities: Private 00:09:58.475 Size (in LBAs): 1048576 (4GiB) 00:09:58.475 Capacity (in LBAs): 1048576 (4GiB) 00:09:58.475 Utilization (in LBAs): 1048576 (4GiB) 00:09:58.475 Thin Provisioning: Not Supported 00:09:58.475 Per-NS Atomic Units: No 00:09:58.475 Maximum Single Source Range Length: 128 00:09:58.475 Maximum Copy Length: 128 00:09:58.475 Maximum Source Range Count: 128 00:09:58.475 NGUID/EUI64 Never Reused: No 00:09:58.475 Namespace Write Protected: No 00:09:58.475 Number of LBA Formats: 8 00:09:58.475 Current LBA Format: LBA Format #04 00:09:58.475 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:58.475 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:58.475 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:58.475 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:58.475 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:58.475 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:58.476 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:58.476 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:58.476 00:09:58.476 NVM Specific Namespace Data 00:09:58.476 =========================== 00:09:58.476 Logical Block Storage Tag Mask: 0 00:09:58.476 Protection Information Capabilities: 00:09:58.476 16b Guard Protection Information Storage Tag Support: No 00:09:58.476 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:58.476 Storage Tag Check Read Support: No 00:09:58.476 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.476 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.476 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.476 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.476 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.476 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.476 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.476 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.476 Namespace ID:2 00:09:58.476 Error Recovery Timeout: Unlimited 00:09:58.476 Command Set Identifier: NVM (00h) 00:09:58.476 Deallocate: Supported 00:09:58.476 Deallocated/Unwritten Error: Supported 00:09:58.476 Deallocated Read Value: All 0x00 00:09:58.476 Deallocate in Write Zeroes: Not Supported 00:09:58.476 Deallocated Guard Field: 0xFFFF 00:09:58.476 Flush: Supported 00:09:58.476 Reservation: Not Supported 00:09:58.476 Namespace Sharing Capabilities: Private 00:09:58.476 Size (in LBAs): 1048576 (4GiB) 00:09:58.476 Capacity (in LBAs): 1048576 (4GiB) 00:09:58.476 Utilization (in LBAs): 1048576 (4GiB) 00:09:58.476 Thin Provisioning: Not Supported 00:09:58.476 Per-NS Atomic Units: No 00:09:58.476 Maximum Single Source Range Length: 128 00:09:58.476 Maximum Copy Length: 128 00:09:58.476 Maximum Source Range Count: 128 00:09:58.476 NGUID/EUI64 Never Reused: No 00:09:58.476 Namespace Write Protected: No 00:09:58.476 Number of LBA Formats: 8 00:09:58.476 Current LBA Format: LBA Format #04 00:09:58.476 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:58.476 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:58.476 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:58.476 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:58.476 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:58.476 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:58.476 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:58.476 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:58.476 00:09:58.476 NVM Specific Namespace Data 00:09:58.476 =========================== 00:09:58.476 Logical Block Storage Tag Mask: 0 00:09:58.476 Protection Information Capabilities: 00:09:58.476 16b Guard Protection Information Storage Tag Support: No 00:09:58.476 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:58.476 Storage Tag Check Read Support: No 00:09:58.476 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.476 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.476 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.476 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.476 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.476 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.476 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.476 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.476 Namespace ID:3 00:09:58.476 Error Recovery Timeout: Unlimited 00:09:58.476 Command Set Identifier: NVM (00h) 00:09:58.476 Deallocate: Supported 00:09:58.476 Deallocated/Unwritten Error: Supported 00:09:58.476 Deallocated Read 
Value: All 0x00 00:09:58.476 Deallocate in Write Zeroes: Not Supported 00:09:58.476 Deallocated Guard Field: 0xFFFF 00:09:58.476 Flush: Supported 00:09:58.476 Reservation: Not Supported 00:09:58.476 Namespace Sharing Capabilities: Private 00:09:58.476 Size (in LBAs): 1048576 (4GiB) 00:09:58.476 Capacity (in LBAs): 1048576 (4GiB) 00:09:58.476 Utilization (in LBAs): 1048576 (4GiB) 00:09:58.476 Thin Provisioning: Not Supported 00:09:58.476 Per-NS Atomic Units: No 00:09:58.476 Maximum Single Source Range Length: 128 00:09:58.476 Maximum Copy Length: 128 00:09:58.476 Maximum Source Range Count: 128 00:09:58.476 NGUID/EUI64 Never Reused: No 00:09:58.476 Namespace Write Protected: No 00:09:58.476 Number of LBA Formats: 8 00:09:58.476 Current LBA Format: LBA Format #04 00:09:58.476 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:58.476 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:58.476 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:58.476 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:58.476 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:58.476 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:58.476 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:58.476 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:58.476 00:09:58.476 NVM Specific Namespace Data 00:09:58.476 =========================== 00:09:58.476 Logical Block Storage Tag Mask: 0 00:09:58.476 Protection Information Capabilities: 00:09:58.476 16b Guard Protection Information Storage Tag Support: No 00:09:58.476 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:58.476 Storage Tag Check Read Support: No 00:09:58.476 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.476 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.476 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.476 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.476 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.476 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.476 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.476 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.476 03:54:41 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:58.476 03:54:41 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:09:58.738 ===================================================== 00:09:58.738 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:58.738 ===================================================== 00:09:58.738 Controller Capabilities/Features 00:09:58.738 ================================ 00:09:58.738 Vendor ID: 1b36 00:09:58.738 Subsystem Vendor ID: 1af4 00:09:58.738 Serial Number: 12343 00:09:58.738 Model Number: QEMU NVMe Ctrl 00:09:58.738 Firmware Version: 8.0.0 00:09:58.738 Recommended Arb Burst: 6 00:09:58.738 IEEE OUI Identifier: 00 54 52 00:09:58.738 Multi-path I/O 00:09:58.738 May have multiple subsystem ports: No 00:09:58.738 May have multiple controllers: Yes 00:09:58.738 Associated with SR-IOV VF: No 00:09:58.738 Max Data Transfer Size: 524288 00:09:58.738 Max Number of Namespaces: 
256 00:09:58.738 Max Number of I/O Queues: 64 00:09:58.738 NVMe Specification Version (VS): 1.4 00:09:58.738 NVMe Specification Version (Identify): 1.4 00:09:58.738 Maximum Queue Entries: 2048 00:09:58.738 Contiguous Queues Required: Yes 00:09:58.738 Arbitration Mechanisms Supported 00:09:58.738 Weighted Round Robin: Not Supported 00:09:58.738 Vendor Specific: Not Supported 00:09:58.738 Reset Timeout: 7500 ms 00:09:58.738 Doorbell Stride: 4 bytes 00:09:58.738 NVM Subsystem Reset: Not Supported 00:09:58.738 Command Sets Supported 00:09:58.738 NVM Command Set: Supported 00:09:58.738 Boot Partition: Not Supported 00:09:58.738 Memory Page Size Minimum: 4096 bytes 00:09:58.738 Memory Page Size Maximum: 65536 bytes 00:09:58.738 Persistent Memory Region: Not Supported 00:09:58.738 Optional Asynchronous Events Supported 00:09:58.738 Namespace Attribute Notices: Supported 00:09:58.738 Firmware Activation Notices: Not Supported 00:09:58.738 ANA Change Notices: Not Supported 00:09:58.738 PLE Aggregate Log Change Notices: Not Supported 00:09:58.738 LBA Status Info Alert Notices: Not Supported 00:09:58.738 EGE Aggregate Log Change Notices: Not Supported 00:09:58.738 Normal NVM Subsystem Shutdown event: Not Supported 00:09:58.738 Zone Descriptor Change Notices: Not Supported 00:09:58.738 Discovery Log Change Notices: Not Supported 00:09:58.738 Controller Attributes 00:09:58.738 128-bit Host Identifier: Not Supported 00:09:58.738 Non-Operational Permissive Mode: Not Supported 00:09:58.738 NVM Sets: Not Supported 00:09:58.738 Read Recovery Levels: Not Supported 00:09:58.738 Endurance Groups: Supported 00:09:58.738 Predictable Latency Mode: Not Supported 00:09:58.738 Traffic Based Keep Alive: Not Supported 00:09:58.738 Namespace Granularity: Not Supported 00:09:58.738 SQ Associations: Not Supported 00:09:58.738 UUID List: Not Supported 00:09:58.738 Multi-Domain Subsystem: Not Supported 00:09:58.738 Fixed Capacity Management: Not Supported 00:09:58.738 Variable Capacity Management: Not Supported 00:09:58.738 Delete Endurance Group: Not Supported 00:09:58.738 Delete NVM Set: Not Supported 00:09:58.738 Extended LBA Formats Supported: Supported 00:09:58.738 Flexible Data Placement Supported: Supported 00:09:58.738 00:09:58.738 Controller Memory Buffer Support 00:09:58.738 ================================ 00:09:58.738 Supported: No 00:09:58.738 00:09:58.738 Persistent Memory Region Support 00:09:58.738 ================================ 00:09:58.738 Supported: No 00:09:58.738 00:09:58.738 Admin Command Set Attributes 00:09:58.738 ============================ 00:09:58.738 Security Send/Receive: Not Supported 00:09:58.738 Format NVM: Supported 00:09:58.738 Firmware Activate/Download: Not Supported 00:09:58.738 Namespace Management: Supported 00:09:58.738 Device Self-Test: Not Supported 00:09:58.738 Directives: Supported 00:09:58.738 NVMe-MI: Not Supported 00:09:58.738 Virtualization Management: Not Supported 00:09:58.738 Doorbell Buffer Config: Supported 00:09:58.738 Get LBA Status Capability: Not Supported 00:09:58.738 Command & Feature Lockdown Capability: Not Supported 00:09:58.738 Abort Command Limit: 4 00:09:58.738 Async Event Request Limit: 4 00:09:58.738 Number of Firmware Slots: N/A 00:09:58.738 Firmware Slot 1 Read-Only: N/A 00:09:58.738 Firmware Activation Without Reset: N/A 00:09:58.738 Multiple Update Detection Support: N/A 00:09:58.738 Firmware Update Granularity: No Information Provided 00:09:58.738 Per-Namespace SMART Log: Yes 00:09:58.738 Asymmetric Namespace Access Log Page: Not Supported
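The identify dump being printed here comes from SPDK's spdk_nvme_identify utility, invoked once per PCIe controller by the nvme.sh test script. As a minimal sketch, assuming the same vagrant VM layout this run uses, the dump for the 0000:00:13.0 controller can be reproduced by hand with the exact binary path and flags visible in this log (-r selects the PCIe transport ID, -i 0 the shared memory group):

    # Sketch: re-run the identify step manually; path and flags are taken
    # verbatim from this log, nothing else is assumed.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:PCIe traddr:0000:00:13.0' -i 0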
00:09:58.738 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:58.738 Command Effects Log Page: Supported 00:09:58.738 Get Log Page Extended Data: Supported 00:09:58.738 Telemetry Log Pages: Not Supported 00:09:58.738 Persistent Event Log Pages: Not Supported 00:09:58.738 Supported Log Pages Log Page: May Support 00:09:58.738 Commands Supported & Effects Log Page: Not Supported 00:09:58.738 Feature Identifiers & Effects Log Page: May Support 00:09:58.738 NVMe-MI Commands & Effects Log Page: May Support 00:09:58.738 Data Area 4 for Telemetry Log: Not Supported 00:09:58.738 Error Log Page Entries Supported: 1 00:09:58.738 Keep Alive: Not Supported 00:09:58.738 00:09:58.738 NVM Command Set Attributes 00:09:58.738 ========================== 00:09:58.738 Submission Queue Entry Size 00:09:58.738 Max: 64 00:09:58.738 Min: 64 00:09:58.738 Completion Queue Entry Size 00:09:58.738 Max: 16 00:09:58.738 Min: 16 00:09:58.738 Number of Namespaces: 256 00:09:58.738 Compare Command: Supported 00:09:58.738 Write Uncorrectable Command: Not Supported 00:09:58.738 Dataset Management Command: Supported 00:09:58.738 Write Zeroes Command: Supported 00:09:58.738 Set Features Save Field: Supported 00:09:58.738 Reservations: Not Supported 00:09:58.738 Timestamp: Supported 00:09:58.738 Copy: Supported 00:09:58.738 Volatile Write Cache: Present 00:09:58.738 Atomic Write Unit (Normal): 1 00:09:58.738 Atomic Write Unit (PFail): 1 00:09:58.738 Atomic Compare & Write Unit: 1 00:09:58.738 Fused Compare & Write: Not Supported 00:09:58.738 Scatter-Gather List 00:09:58.738 SGL Command Set: Supported 00:09:58.738 SGL Keyed: Not Supported 00:09:58.738 SGL Bit Bucket Descriptor: Not Supported 00:09:58.738 SGL Metadata Pointer: Not Supported 00:09:58.738 Oversized SGL: Not Supported 00:09:58.738 SGL Metadata Address: Not Supported 00:09:58.738 SGL Offset: Not Supported 00:09:58.738 Transport SGL Data Block: Not Supported 00:09:58.738 Replay Protected Memory Block: Not Supported 00:09:58.738 00:09:58.738 Firmware Slot Information 00:09:58.738 ========================= 00:09:58.738 Active slot: 1 00:09:58.738 Slot 1 Firmware Revision: 1.0 00:09:58.738 00:09:58.738 00:09:58.738 Commands Supported and Effects 00:09:58.738 ============================== 00:09:58.738 Admin Commands 00:09:58.738 -------------- 00:09:58.738 Delete I/O Submission Queue (00h): Supported 00:09:58.739 Create I/O Submission Queue (01h): Supported 00:09:58.739 Get Log Page (02h): Supported 00:09:58.739 Delete I/O Completion Queue (04h): Supported 00:09:58.739 Create I/O Completion Queue (05h): Supported 00:09:58.739 Identify (06h): Supported 00:09:58.739 Abort (08h): Supported 00:09:58.739 Set Features (09h): Supported 00:09:58.739 Get Features (0Ah): Supported 00:09:58.739 Asynchronous Event Request (0Ch): Supported 00:09:58.739 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:58.739 Directive Send (19h): Supported 00:09:58.739 Directive Receive (1Ah): Supported 00:09:58.739 Virtualization Management (1Ch): Supported 00:09:58.739 Doorbell Buffer Config (7Ch): Supported 00:09:58.739 Format NVM (80h): Supported LBA-Change 00:09:58.739 I/O Commands 00:09:58.739 ------------ 00:09:58.739 Flush (00h): Supported LBA-Change 00:09:58.739 Write (01h): Supported LBA-Change 00:09:58.739 Read (02h): Supported 00:09:58.739 Compare (05h): Supported 00:09:58.739 Write Zeroes (08h): Supported LBA-Change 00:09:58.739 Dataset Management (09h): Supported LBA-Change 00:09:58.739 Unknown (0Ch): Supported 00:09:58.739 Unknown (12h): Supported 00:09:58.739 Copy
(19h): Supported LBA-Change 00:09:58.739 Unknown (1Dh): Supported LBA-Change 00:09:58.739 00:09:58.739 Error Log 00:09:58.739 ========= 00:09:58.739 00:09:58.739 Arbitration 00:09:58.739 =========== 00:09:58.739 Arbitration Burst: no limit 00:09:58.739 00:09:58.739 Power Management 00:09:58.739 ================ 00:09:58.739 Number of Power States: 1 00:09:58.739 Current Power State: Power State #0 00:09:58.739 Power State #0: 00:09:58.739 Max Power: 25.00 W 00:09:58.739 Non-Operational State: Operational 00:09:58.739 Entry Latency: 16 microseconds 00:09:58.739 Exit Latency: 4 microseconds 00:09:58.739 Relative Read Throughput: 0 00:09:58.739 Relative Read Latency: 0 00:09:58.739 Relative Write Throughput: 0 00:09:58.739 Relative Write Latency: 0 00:09:58.739 Idle Power: Not Reported 00:09:58.739 Active Power: Not Reported 00:09:58.739 Non-Operational Permissive Mode: Not Supported 00:09:58.739 00:09:58.739 Health Information 00:09:58.739 ================== 00:09:58.739 Critical Warnings: 00:09:58.739 Available Spare Space: OK 00:09:58.739 Temperature: OK 00:09:58.739 Device Reliability: OK 00:09:58.739 Read Only: No 00:09:58.739 Volatile Memory Backup: OK 00:09:58.739 Current Temperature: 323 Kelvin (50 Celsius) 00:09:58.739 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:58.739 Available Spare: 0% 00:09:58.739 Available Spare Threshold: 0% 00:09:58.739 Life Percentage Used: 0% 00:09:58.739 Data Units Read: 1062 00:09:58.739 Data Units Written: 991 00:09:58.739 Host Read Commands: 33097 00:09:58.739 Host Write Commands: 32520 00:09:58.739 Controller Busy Time: 0 minutes 00:09:58.739 Power Cycles: 0 00:09:58.739 Power On Hours: 0 hours 00:09:58.739 Unsafe Shutdowns: 0 00:09:58.739 Unrecoverable Media Errors: 0 00:09:58.739 Lifetime Error Log Entries: 0 00:09:58.739 Warning Temperature Time: 0 minutes 00:09:58.739 Critical Temperature Time: 0 minutes 00:09:58.739 00:09:58.739 Number of Queues 00:09:58.739 ================ 00:09:58.739 Number of I/O Submission Queues: 64 00:09:58.739 Number of I/O Completion Queues: 64 00:09:58.739 00:09:58.739 ZNS Specific Controller Data 00:09:58.739 ============================ 00:09:58.739 Zone Append Size Limit: 0 00:09:58.739 00:09:58.739 00:09:58.739 Active Namespaces 00:09:58.739 ================= 00:09:58.739 Namespace ID:1 00:09:58.739 Error Recovery Timeout: Unlimited 00:09:58.739 Command Set Identifier: NVM (00h) 00:09:58.739 Deallocate: Supported 00:09:58.739 Deallocated/Unwritten Error: Supported 00:09:58.739 Deallocated Read Value: All 0x00 00:09:58.739 Deallocate in Write Zeroes: Not Supported 00:09:58.739 Deallocated Guard Field: 0xFFFF 00:09:58.739 Flush: Supported 00:09:58.739 Reservation: Not Supported 00:09:58.739 Namespace Sharing Capabilities: Multiple Controllers 00:09:58.739 Size (in LBAs): 262144 (1GiB) 00:09:58.739 Capacity (in LBAs): 262144 (1GiB) 00:09:58.739 Utilization (in LBAs): 262144 (1GiB) 00:09:58.739 Thin Provisioning: Not Supported 00:09:58.739 Per-NS Atomic Units: No 00:09:58.739 Maximum Single Source Range Length: 128 00:09:58.739 Maximum Copy Length: 128 00:09:58.739 Maximum Source Range Count: 128 00:09:58.739 NGUID/EUI64 Never Reused: No 00:09:58.739 Namespace Write Protected: No 00:09:58.739 Endurance group ID: 1 00:09:58.739 Number of LBA Formats: 8 00:09:58.739 Current LBA Format: LBA Format #04 00:09:58.739 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:58.739 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:58.739 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:58.739 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:09:58.739 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:58.739 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:58.739 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:58.739 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:58.739 00:09:58.739 Get Feature FDP: 00:09:58.739 ================ 00:09:58.739 Enabled: Yes 00:09:58.739 FDP configuration index: 0 00:09:58.739 00:09:58.739 FDP configurations log page 00:09:58.739 =========================== 00:09:58.739 Number of FDP configurations: 1 00:09:58.739 Version: 0 00:09:58.739 Size: 112 00:09:58.739 FDP Configuration Descriptor: 0 00:09:58.739 Descriptor Size: 96 00:09:58.739 Reclaim Group Identifier format: 2 00:09:58.739 FDP Volatile Write Cache: Not Present 00:09:58.739 FDP Configuration: Valid 00:09:58.739 Vendor Specific Size: 0 00:09:58.739 Number of Reclaim Groups: 2 00:09:58.739 Number of Reclaim Unit Handles: 8 00:09:58.739 Max Placement Identifiers: 128 00:09:58.739 Number of Namespaces Supported: 256 00:09:58.739 Reclaim unit Nominal Size: 6000000 bytes 00:09:58.739 Estimated Reclaim Unit Time Limit: Not Reported 00:09:58.739 RUH Desc #000: RUH Type: Initially Isolated 00:09:58.739 RUH Desc #001: RUH Type: Initially Isolated 00:09:58.739 RUH Desc #002: RUH Type: Initially Isolated 00:09:58.739 RUH Desc #003: RUH Type: Initially Isolated 00:09:58.739 RUH Desc #004: RUH Type: Initially Isolated 00:09:58.739 RUH Desc #005: RUH Type: Initially Isolated 00:09:58.739 RUH Desc #006: RUH Type: Initially Isolated 00:09:58.739 RUH Desc #007: RUH Type: Initially Isolated 00:09:58.739 00:09:58.739 FDP reclaim unit handle usage log page 00:09:58.739 ====================================== 00:09:58.739 Number of Reclaim Unit Handles: 8 00:09:58.739 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:58.739 RUH Usage Desc #001: RUH Attributes: Unused 00:09:58.739 RUH Usage Desc #002: RUH Attributes: Unused 00:09:58.739 RUH Usage Desc #003: RUH Attributes: Unused 00:09:58.739 RUH Usage Desc #004: RUH Attributes: Unused 00:09:58.739 RUH Usage Desc #005: RUH Attributes: Unused 00:09:58.739 RUH Usage Desc #006: RUH Attributes: Unused 00:09:58.739 RUH Usage Desc #007: RUH Attributes: Unused 00:09:58.739 00:09:58.739 FDP statistics log page 00:09:58.739 ======================= 00:09:58.739 Host bytes with metadata written: 618504192 00:09:58.739 Media bytes with metadata written: 618586112 00:09:58.739 Media bytes erased: 0 00:09:58.739 00:09:58.739 FDP events log page 00:09:58.739 =================== 00:09:58.739 Number of FDP events: 0 00:09:58.739 00:09:58.739 NVM Specific Namespace Data 00:09:58.739 =========================== 00:09:58.739 Logical Block Storage Tag Mask: 0 00:09:58.739 Protection Information Capabilities: 00:09:58.739 16b Guard Protection Information Storage Tag Support: No 00:09:58.739 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:58.739 Storage Tag Check Read Support: No 00:09:58.739 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.739 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.739 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.739 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.739 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.739 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.739 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.739 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:58.739 00:09:58.739 real 0m1.729s 00:09:58.739 user 0m0.653s 00:09:58.739 sys 0m0.897s 00:09:58.739 03:54:41 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.739 ************************************ 00:09:58.739 END TEST nvme_identify 00:09:58.739 03:54:41 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:09:58.739 ************************************ 00:09:59.000 03:54:41 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:59.000 03:54:41 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:59.000 03:54:41 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.000 03:54:41 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:59.000 ************************************ 00:09:59.000 START TEST nvme_perf 00:09:59.000 ************************************ 00:09:59.000 03:54:41 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:09:59.000 03:54:41 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:10:00.381 Initializing NVMe Controllers 00:10:00.381 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:00.381 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:00.381 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:00.381 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:00.381 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:00.381 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:00.381 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:00.381 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:00.381 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:00.381 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:00.381 Initialization complete. Launching workers. 
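A note on reading the summary table that follows: with a fixed transfer size, the MiB/s column is simply IOPS multiplied by the -o payload size (12288 bytes for this run). As a quick sanity check under that assumption, an awk one-liner against the first data row below reproduces the reported figure:

    # IOPS * io-size-bytes / MiB; the 14087.26 value is the IOPS from the
    # PCIE (0000:00:10.0) NSID 1 row of the table that follows.
    awk 'BEGIN { printf "%.2f\n", 14087.26 * 12288 / (1024 * 1024) }'
    # prints 165.09, matching that row's MiB/s column to rounding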
00:10:00.381 ======================================================== 00:10:00.381 Latency(us) 00:10:00.381 Device Information : IOPS MiB/s Average min max 00:10:00.381 PCIE (0000:00:10.0) NSID 1 from core 0: 14087.26 165.09 9105.07 7836.94 49382.04 00:10:00.381 PCIE (0000:00:11.0) NSID 1 from core 0: 14087.26 165.09 9091.82 7938.98 47512.57 00:10:00.381 PCIE (0000:00:13.0) NSID 1 from core 0: 14087.26 165.09 9077.38 7948.60 46373.48 00:10:00.381 PCIE (0000:00:12.0) NSID 1 from core 0: 14087.26 165.09 9062.00 8004.09 44425.78 00:10:00.381 PCIE (0000:00:12.0) NSID 2 from core 0: 14087.26 165.09 9047.46 7964.11 42490.13 00:10:00.381 PCIE (0000:00:12.0) NSID 3 from core 0: 14151.00 165.83 8992.11 7975.16 35258.65 00:10:00.381 ======================================================== 00:10:00.381 Total : 84587.28 991.26 9062.59 7836.94 49382.04 00:10:00.381 00:10:00.381 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:00.381 ================================================================================= 00:10:00.381 1.00000% : 8053.822us 00:10:00.381 10.00000% : 8317.018us 00:10:00.381 25.00000% : 8527.576us 00:10:00.381 50.00000% : 8790.773us 00:10:00.381 75.00000% : 9053.969us 00:10:00.381 90.00000% : 9317.166us 00:10:00.381 95.00000% : 9527.724us 00:10:00.381 98.00000% : 10106.757us 00:10:00.381 99.00000% : 11422.741us 00:10:00.381 99.50000% : 41900.929us 00:10:00.382 99.90000% : 49059.881us 00:10:00.382 99.99000% : 49480.996us 00:10:00.382 99.99900% : 49480.996us 00:10:00.382 99.99990% : 49480.996us 00:10:00.382 99.99999% : 49480.996us 00:10:00.382 00:10:00.382 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:00.382 ================================================================================= 00:10:00.382 1.00000% : 8159.100us 00:10:00.382 10.00000% : 8369.658us 00:10:00.382 25.00000% : 8527.576us 00:10:00.382 50.00000% : 8790.773us 00:10:00.382 75.00000% : 9001.330us 00:10:00.382 90.00000% : 9264.527us 00:10:00.382 95.00000% : 9475.084us 00:10:00.382 98.00000% : 10001.478us 00:10:00.382 99.00000% : 11633.298us 00:10:00.382 99.50000% : 40216.469us 00:10:00.382 99.90000% : 47375.422us 00:10:00.382 99.99000% : 47585.979us 00:10:00.382 99.99900% : 47585.979us 00:10:00.382 99.99990% : 47585.979us 00:10:00.382 99.99999% : 47585.979us 00:10:00.382 00:10:00.382 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:00.382 ================================================================================= 00:10:00.382 1.00000% : 8159.100us 00:10:00.382 10.00000% : 8369.658us 00:10:00.382 25.00000% : 8527.576us 00:10:00.382 50.00000% : 8790.773us 00:10:00.382 75.00000% : 9001.330us 00:10:00.382 90.00000% : 9264.527us 00:10:00.382 95.00000% : 9475.084us 00:10:00.382 98.00000% : 9948.839us 00:10:00.382 99.00000% : 11475.380us 00:10:00.382 99.50000% : 39163.682us 00:10:00.382 99.90000% : 46112.077us 00:10:00.382 99.99000% : 46533.192us 00:10:00.382 99.99900% : 46533.192us 00:10:00.382 99.99990% : 46533.192us 00:10:00.382 99.99999% : 46533.192us 00:10:00.382 00:10:00.382 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:00.382 ================================================================================= 00:10:00.382 1.00000% : 8159.100us 00:10:00.382 10.00000% : 8369.658us 00:10:00.382 25.00000% : 8527.576us 00:10:00.382 50.00000% : 8790.773us 00:10:00.382 75.00000% : 9001.330us 00:10:00.382 90.00000% : 9264.527us 00:10:00.382 95.00000% : 9475.084us 00:10:00.382 98.00000% : 10001.478us 00:10:00.382 99.00000% : 
11791.216us 00:10:00.382 99.50000% : 37268.665us 00:10:00.382 99.90000% : 44217.060us 00:10:00.382 99.99000% : 44427.618us 00:10:00.382 99.99900% : 44427.618us 00:10:00.382 99.99990% : 44427.618us 00:10:00.382 99.99999% : 44427.618us 00:10:00.382 00:10:00.382 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:00.382 ================================================================================= 00:10:00.382 1.00000% : 8159.100us 00:10:00.382 10.00000% : 8369.658us 00:10:00.382 25.00000% : 8527.576us 00:10:00.382 50.00000% : 8790.773us 00:10:00.382 75.00000% : 9001.330us 00:10:00.382 90.00000% : 9264.527us 00:10:00.382 95.00000% : 9475.084us 00:10:00.382 98.00000% : 10001.478us 00:10:00.382 99.00000% : 12159.692us 00:10:00.382 99.50000% : 35584.206us 00:10:00.382 99.90000% : 42322.043us 00:10:00.382 99.99000% : 42532.601us 00:10:00.382 99.99900% : 42532.601us 00:10:00.382 99.99990% : 42532.601us 00:10:00.382 99.99999% : 42532.601us 00:10:00.382 00:10:00.382 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:00.382 ================================================================================= 00:10:00.382 1.00000% : 8159.100us 00:10:00.382 10.00000% : 8369.658us 00:10:00.382 25.00000% : 8527.576us 00:10:00.382 50.00000% : 8790.773us 00:10:00.382 75.00000% : 9001.330us 00:10:00.382 90.00000% : 9264.527us 00:10:00.382 95.00000% : 9527.724us 00:10:00.382 98.00000% : 10475.232us 00:10:00.382 99.00000% : 12475.528us 00:10:00.382 99.50000% : 28214.696us 00:10:00.382 99.90000% : 34952.533us 00:10:00.382 99.99000% : 35373.648us 00:10:00.382 99.99900% : 35373.648us 00:10:00.382 99.99990% : 35373.648us 00:10:00.382 99.99999% : 35373.648us 00:10:00.382 00:10:00.382 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:00.382 ============================================================================== 00:10:00.382 Range in us Cumulative IO count 00:10:00.382 7790.625 - 7843.264: 0.0071% ( 1) 00:10:00.382 7843.264 - 7895.904: 0.0424% ( 5) 00:10:00.382 7895.904 - 7948.543: 0.1838% ( 20) 00:10:00.382 7948.543 - 8001.182: 0.5585% ( 53) 00:10:00.382 8001.182 - 8053.822: 1.1454% ( 83) 00:10:00.382 8053.822 - 8106.461: 2.3756% ( 174) 00:10:00.382 8106.461 - 8159.100: 4.2138% ( 260) 00:10:00.382 8159.100 - 8211.740: 6.6247% ( 341) 00:10:00.382 8211.740 - 8264.379: 9.7497% ( 442) 00:10:00.382 8264.379 - 8317.018: 13.2070% ( 489) 00:10:00.382 8317.018 - 8369.658: 16.8481% ( 515) 00:10:00.382 8369.658 - 8422.297: 20.7508% ( 552) 00:10:00.382 8422.297 - 8474.937: 24.9081% ( 588) 00:10:00.382 8474.937 - 8527.576: 29.1643% ( 602) 00:10:00.382 8527.576 - 8580.215: 33.6538% ( 635) 00:10:00.382 8580.215 - 8632.855: 38.2282% ( 647) 00:10:00.382 8632.855 - 8685.494: 43.0713% ( 685) 00:10:00.382 8685.494 - 8738.133: 47.8436% ( 675) 00:10:00.382 8738.133 - 8790.773: 52.7927% ( 700) 00:10:00.382 8790.773 - 8843.412: 57.8973% ( 722) 00:10:00.382 8843.412 - 8896.051: 62.7616% ( 688) 00:10:00.382 8896.051 - 8948.691: 67.6965% ( 698) 00:10:00.382 8948.691 - 9001.330: 72.5184% ( 682) 00:10:00.382 9001.330 - 9053.969: 76.9019% ( 620) 00:10:00.382 9053.969 - 9106.609: 80.8470% ( 558) 00:10:00.382 9106.609 - 9159.248: 84.1063% ( 461) 00:10:00.382 9159.248 - 9211.888: 86.8071% ( 382) 00:10:00.382 9211.888 - 9264.527: 89.0342% ( 315) 00:10:00.382 9264.527 - 9317.166: 90.7876% ( 248) 00:10:00.382 9317.166 - 9369.806: 92.2158% ( 202) 00:10:00.382 9369.806 - 9422.445: 93.4884% ( 180) 00:10:00.382 9422.445 - 9475.084: 94.5348% ( 148) 00:10:00.382 9475.084 - 9527.724: 95.3832% ( 
120) 00:10:00.382 9527.724 - 9580.363: 96.0619% ( 96) 00:10:00.382 9580.363 - 9633.002: 96.6063% ( 77) 00:10:00.382 9633.002 - 9685.642: 96.9952% ( 55) 00:10:00.382 9685.642 - 9738.281: 97.3063% ( 44) 00:10:00.382 9738.281 - 9790.920: 97.4901% ( 26) 00:10:00.382 9790.920 - 9843.560: 97.6174% ( 18) 00:10:00.382 9843.560 - 9896.199: 97.7093% ( 13) 00:10:00.382 9896.199 - 9948.839: 97.8224% ( 16) 00:10:00.382 9948.839 - 10001.478: 97.9002% ( 11) 00:10:00.382 10001.478 - 10054.117: 97.9850% ( 12) 00:10:00.382 10054.117 - 10106.757: 98.0557% ( 10) 00:10:00.382 10106.757 - 10159.396: 98.1406% ( 12) 00:10:00.382 10159.396 - 10212.035: 98.2042% ( 9) 00:10:00.382 10212.035 - 10264.675: 98.2607% ( 8) 00:10:00.382 10264.675 - 10317.314: 98.2820% ( 3) 00:10:00.382 10317.314 - 10369.953: 98.3032% ( 3) 00:10:00.382 10369.953 - 10422.593: 98.3173% ( 2) 00:10:00.382 10422.593 - 10475.232: 98.3314% ( 2) 00:10:00.382 10475.232 - 10527.871: 98.3597% ( 4) 00:10:00.382 10527.871 - 10580.511: 98.3880% ( 4) 00:10:00.382 10580.511 - 10633.150: 98.4234% ( 5) 00:10:00.382 10633.150 - 10685.790: 98.4658% ( 6) 00:10:00.382 10685.790 - 10738.429: 98.5082% ( 6) 00:10:00.382 10738.429 - 10791.068: 98.5436% ( 5) 00:10:00.382 10791.068 - 10843.708: 98.5718% ( 4) 00:10:00.382 10843.708 - 10896.347: 98.6143% ( 6) 00:10:00.382 10896.347 - 10948.986: 98.6567% ( 6) 00:10:00.382 10948.986 - 11001.626: 98.6991% ( 6) 00:10:00.382 11001.626 - 11054.265: 98.7344% ( 5) 00:10:00.382 11054.265 - 11106.904: 98.7769% ( 6) 00:10:00.382 11106.904 - 11159.544: 98.8122% ( 5) 00:10:00.382 11159.544 - 11212.183: 98.8546% ( 6) 00:10:00.382 11212.183 - 11264.822: 98.9041% ( 7) 00:10:00.382 11264.822 - 11317.462: 98.9324% ( 4) 00:10:00.382 11317.462 - 11370.101: 98.9678% ( 5) 00:10:00.382 11370.101 - 11422.741: 99.0173% ( 7) 00:10:00.382 11422.741 - 11475.380: 99.0455% ( 4) 00:10:00.382 11475.380 - 11528.019: 99.0738% ( 4) 00:10:00.382 11528.019 - 11580.659: 99.0950% ( 3) 00:10:00.382 40216.469 - 40427.027: 99.1374% ( 6) 00:10:00.382 40427.027 - 40637.584: 99.1799% ( 6) 00:10:00.382 40637.584 - 40848.141: 99.2364% ( 8) 00:10:00.382 40848.141 - 41058.699: 99.2930% ( 8) 00:10:00.382 41058.699 - 41269.256: 99.3425% ( 7) 00:10:00.382 41269.256 - 41479.814: 99.4061% ( 9) 00:10:00.382 41479.814 - 41690.371: 99.4627% ( 8) 00:10:00.382 41690.371 - 41900.929: 99.5122% ( 7) 00:10:00.382 41900.929 - 42111.486: 99.5475% ( 5) 00:10:00.382 47375.422 - 47585.979: 99.5617% ( 2) 00:10:00.382 47585.979 - 47796.537: 99.6182% ( 8) 00:10:00.382 47796.537 - 48007.094: 99.6748% ( 8) 00:10:00.382 48007.094 - 48217.651: 99.7172% ( 6) 00:10:00.382 48217.651 - 48428.209: 99.7667% ( 7) 00:10:00.382 48428.209 - 48638.766: 99.8162% ( 7) 00:10:00.382 48638.766 - 48849.324: 99.8798% ( 9) 00:10:00.382 48849.324 - 49059.881: 99.9222% ( 6) 00:10:00.382 49059.881 - 49270.439: 99.9788% ( 8) 00:10:00.382 49270.439 - 49480.996: 100.0000% ( 3) 00:10:00.382 00:10:00.382 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:00.382 ============================================================================== 00:10:00.382 Range in us Cumulative IO count 00:10:00.382 7895.904 - 7948.543: 0.0071% ( 1) 00:10:00.382 7948.543 - 8001.182: 0.0424% ( 5) 00:10:00.383 8001.182 - 8053.822: 0.2121% ( 24) 00:10:00.383 8053.822 - 8106.461: 0.7848% ( 81) 00:10:00.383 8106.461 - 8159.100: 1.7605% ( 138) 00:10:00.383 8159.100 - 8211.740: 3.3937% ( 231) 00:10:00.383 8211.740 - 8264.379: 5.7622% ( 335) 00:10:00.383 8264.379 - 8317.018: 8.9791% ( 455) 00:10:00.383 8317.018 - 8369.658: 12.9171% 
( 557) 00:10:00.383 8369.658 - 8422.297: 17.0956% ( 591) 00:10:00.383 8422.297 - 8474.937: 21.6982% ( 651) 00:10:00.383 8474.937 - 8527.576: 26.6544% ( 701) 00:10:00.383 8527.576 - 8580.215: 31.7237% ( 717) 00:10:00.383 8580.215 - 8632.855: 37.0263% ( 750) 00:10:00.383 8632.855 - 8685.494: 42.4420% ( 766) 00:10:00.383 8685.494 - 8738.133: 48.0911% ( 799) 00:10:00.383 8738.133 - 8790.773: 53.8320% ( 812) 00:10:00.383 8790.773 - 8843.412: 59.6719% ( 826) 00:10:00.383 8843.412 - 8896.051: 65.3846% ( 808) 00:10:00.383 8896.051 - 8948.691: 70.8923% ( 779) 00:10:00.383 8948.691 - 9001.330: 75.9403% ( 714) 00:10:00.383 9001.330 - 9053.969: 80.2319% ( 607) 00:10:00.383 9053.969 - 9106.609: 83.7458% ( 497) 00:10:00.383 9106.609 - 9159.248: 86.6233% ( 407) 00:10:00.383 9159.248 - 9211.888: 88.8928% ( 321) 00:10:00.383 9211.888 - 9264.527: 90.6038% ( 242) 00:10:00.383 9264.527 - 9317.166: 92.0814% ( 209) 00:10:00.383 9317.166 - 9369.806: 93.4036% ( 187) 00:10:00.383 9369.806 - 9422.445: 94.4570% ( 149) 00:10:00.383 9422.445 - 9475.084: 95.2630% ( 114) 00:10:00.383 9475.084 - 9527.724: 95.9630% ( 99) 00:10:00.383 9527.724 - 9580.363: 96.4649% ( 71) 00:10:00.383 9580.363 - 9633.002: 96.8891% ( 60) 00:10:00.383 9633.002 - 9685.642: 97.2073% ( 45) 00:10:00.383 9685.642 - 9738.281: 97.4053% ( 28) 00:10:00.383 9738.281 - 9790.920: 97.5749% ( 24) 00:10:00.383 9790.920 - 9843.560: 97.7234% ( 21) 00:10:00.383 9843.560 - 9896.199: 97.8436% ( 17) 00:10:00.383 9896.199 - 9948.839: 97.9638% ( 17) 00:10:00.383 9948.839 - 10001.478: 98.0557% ( 13) 00:10:00.383 10001.478 - 10054.117: 98.1264% ( 10) 00:10:00.383 10054.117 - 10106.757: 98.1688% ( 6) 00:10:00.383 10106.757 - 10159.396: 98.1900% ( 3) 00:10:00.383 10264.675 - 10317.314: 98.1971% ( 1) 00:10:00.383 10317.314 - 10369.953: 98.2254% ( 4) 00:10:00.383 10369.953 - 10422.593: 98.2537% ( 4) 00:10:00.383 10422.593 - 10475.232: 98.2749% ( 3) 00:10:00.383 10475.232 - 10527.871: 98.3244% ( 7) 00:10:00.383 10527.871 - 10580.511: 98.3668% ( 6) 00:10:00.383 10580.511 - 10633.150: 98.4021% ( 5) 00:10:00.383 10633.150 - 10685.790: 98.4516% ( 7) 00:10:00.383 10685.790 - 10738.429: 98.4870% ( 5) 00:10:00.383 10738.429 - 10791.068: 98.5294% ( 6) 00:10:00.383 10791.068 - 10843.708: 98.5789% ( 7) 00:10:00.383 10843.708 - 10896.347: 98.6213% ( 6) 00:10:00.383 10896.347 - 10948.986: 98.6637% ( 6) 00:10:00.383 10948.986 - 11001.626: 98.7132% ( 7) 00:10:00.383 11001.626 - 11054.265: 98.7557% ( 6) 00:10:00.383 11054.265 - 11106.904: 98.8051% ( 7) 00:10:00.383 11106.904 - 11159.544: 98.8476% ( 6) 00:10:00.383 11159.544 - 11212.183: 98.8758% ( 4) 00:10:00.383 11212.183 - 11264.822: 98.8900% ( 2) 00:10:00.383 11264.822 - 11317.462: 98.9112% ( 3) 00:10:00.383 11317.462 - 11370.101: 98.9253% ( 2) 00:10:00.383 11370.101 - 11422.741: 98.9395% ( 2) 00:10:00.383 11422.741 - 11475.380: 98.9607% ( 3) 00:10:00.383 11475.380 - 11528.019: 98.9819% ( 3) 00:10:00.383 11528.019 - 11580.659: 98.9960% ( 2) 00:10:00.383 11580.659 - 11633.298: 99.0173% ( 3) 00:10:00.383 11633.298 - 11685.937: 99.0314% ( 2) 00:10:00.383 11685.937 - 11738.577: 99.0526% ( 3) 00:10:00.383 11738.577 - 11791.216: 99.0667% ( 2) 00:10:00.383 11791.216 - 11843.855: 99.0880% ( 3) 00:10:00.383 11843.855 - 11896.495: 99.0950% ( 1) 00:10:00.383 38532.010 - 38742.567: 99.1233% ( 4) 00:10:00.383 38742.567 - 38953.124: 99.1799% ( 8) 00:10:00.383 38953.124 - 39163.682: 99.2364% ( 8) 00:10:00.383 39163.682 - 39374.239: 99.2859% ( 7) 00:10:00.383 39374.239 - 39584.797: 99.3495% ( 9) 00:10:00.383 39584.797 - 39795.354: 99.3990% ( 7) 
00:10:00.383 39795.354 - 40005.912: 99.4556% ( 8) 00:10:00.383 40005.912 - 40216.469: 99.5122% ( 8) 00:10:00.383 40216.469 - 40427.027: 99.5475% ( 5) 00:10:00.383 45690.962 - 45901.520: 99.5758% ( 4) 00:10:00.383 45901.520 - 46112.077: 99.6324% ( 8) 00:10:00.383 46112.077 - 46322.635: 99.6818% ( 7) 00:10:00.383 46322.635 - 46533.192: 99.7384% ( 8) 00:10:00.383 46533.192 - 46743.749: 99.7808% ( 6) 00:10:00.383 46743.749 - 46954.307: 99.8374% ( 8) 00:10:00.383 46954.307 - 47164.864: 99.8939% ( 8) 00:10:00.383 47164.864 - 47375.422: 99.9576% ( 9) 00:10:00.383 47375.422 - 47585.979: 100.0000% ( 6) 00:10:00.383 00:10:00.383 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:00.383 ============================================================================== 00:10:00.383 Range in us Cumulative IO count 00:10:00.383 7948.543 - 8001.182: 0.0283% ( 4) 00:10:00.383 8001.182 - 8053.822: 0.1838% ( 22) 00:10:00.383 8053.822 - 8106.461: 0.8201% ( 90) 00:10:00.383 8106.461 - 8159.100: 1.9372% ( 158) 00:10:00.383 8159.100 - 8211.740: 3.6128% ( 237) 00:10:00.383 8211.740 - 8264.379: 5.9672% ( 333) 00:10:00.383 8264.379 - 8317.018: 9.1629% ( 452) 00:10:00.383 8317.018 - 8369.658: 12.9949% ( 542) 00:10:00.383 8369.658 - 8422.297: 17.1380% ( 586) 00:10:00.383 8422.297 - 8474.937: 21.7972% ( 659) 00:10:00.383 8474.937 - 8527.576: 26.6049% ( 680) 00:10:00.383 8527.576 - 8580.215: 31.6389% ( 712) 00:10:00.383 8580.215 - 8632.855: 36.9627% ( 753) 00:10:00.383 8632.855 - 8685.494: 42.4632% ( 778) 00:10:00.383 8685.494 - 8738.133: 48.2325% ( 816) 00:10:00.383 8738.133 - 8790.773: 54.0229% ( 819) 00:10:00.383 8790.773 - 8843.412: 59.8558% ( 825) 00:10:00.383 8843.412 - 8896.051: 65.6533% ( 820) 00:10:00.383 8896.051 - 8948.691: 70.9417% ( 748) 00:10:00.383 8948.691 - 9001.330: 75.7919% ( 686) 00:10:00.383 9001.330 - 9053.969: 80.0834% ( 607) 00:10:00.383 9053.969 - 9106.609: 83.7811% ( 523) 00:10:00.383 9106.609 - 9159.248: 86.6587% ( 407) 00:10:00.383 9159.248 - 9211.888: 88.8575% ( 311) 00:10:00.383 9211.888 - 9264.527: 90.6886% ( 259) 00:10:00.383 9264.527 - 9317.166: 92.2582% ( 222) 00:10:00.383 9317.166 - 9369.806: 93.5945% ( 189) 00:10:00.383 9369.806 - 9422.445: 94.7257% ( 160) 00:10:00.383 9422.445 - 9475.084: 95.5529% ( 117) 00:10:00.383 9475.084 - 9527.724: 96.2033% ( 92) 00:10:00.383 9527.724 - 9580.363: 96.6841% ( 68) 00:10:00.383 9580.363 - 9633.002: 97.0730% ( 55) 00:10:00.383 9633.002 - 9685.642: 97.2992% ( 32) 00:10:00.383 9685.642 - 9738.281: 97.5113% ( 30) 00:10:00.383 9738.281 - 9790.920: 97.6881% ( 25) 00:10:00.383 9790.920 - 9843.560: 97.8153% ( 18) 00:10:00.383 9843.560 - 9896.199: 97.9285% ( 16) 00:10:00.383 9896.199 - 9948.839: 98.0133% ( 12) 00:10:00.383 9948.839 - 10001.478: 98.0840% ( 10) 00:10:00.383 10001.478 - 10054.117: 98.1476% ( 9) 00:10:00.383 10054.117 - 10106.757: 98.2042% ( 8) 00:10:00.383 10106.757 - 10159.396: 98.2466% ( 6) 00:10:00.383 10159.396 - 10212.035: 98.3032% ( 8) 00:10:00.383 10212.035 - 10264.675: 98.3314% ( 4) 00:10:00.383 10264.675 - 10317.314: 98.3597% ( 4) 00:10:00.383 10317.314 - 10369.953: 98.4092% ( 7) 00:10:00.383 10369.953 - 10422.593: 98.4446% ( 5) 00:10:00.383 10422.593 - 10475.232: 98.4870% ( 6) 00:10:00.383 10475.232 - 10527.871: 98.5294% ( 6) 00:10:00.383 10527.871 - 10580.511: 98.5718% ( 6) 00:10:00.383 10580.511 - 10633.150: 98.6213% ( 7) 00:10:00.383 10633.150 - 10685.790: 98.6637% ( 6) 00:10:00.383 10685.790 - 10738.429: 98.7062% ( 6) 00:10:00.383 10738.429 - 10791.068: 98.7486% ( 6) 00:10:00.383 10791.068 - 10843.708: 98.8051% ( 
8) 00:10:00.383 10843.708 - 10896.347: 98.8264% ( 3) 00:10:00.383 10896.347 - 10948.986: 98.8405% ( 2) 00:10:00.383 10948.986 - 11001.626: 98.8546% ( 2) 00:10:00.383 11001.626 - 11054.265: 98.8688% ( 2) 00:10:00.383 11054.265 - 11106.904: 98.8829% ( 2) 00:10:00.383 11106.904 - 11159.544: 98.9041% ( 3) 00:10:00.383 11159.544 - 11212.183: 98.9183% ( 2) 00:10:00.383 11212.183 - 11264.822: 98.9395% ( 3) 00:10:00.383 11264.822 - 11317.462: 98.9536% ( 2) 00:10:00.383 11317.462 - 11370.101: 98.9748% ( 3) 00:10:00.383 11370.101 - 11422.741: 98.9890% ( 2) 00:10:00.383 11422.741 - 11475.380: 99.0031% ( 2) 00:10:00.383 11475.380 - 11528.019: 99.0243% ( 3) 00:10:00.383 11528.019 - 11580.659: 99.0385% ( 2) 00:10:00.383 11580.659 - 11633.298: 99.0597% ( 3) 00:10:00.383 11633.298 - 11685.937: 99.0738% ( 2) 00:10:00.383 11685.937 - 11738.577: 99.0880% ( 2) 00:10:00.383 11738.577 - 11791.216: 99.0950% ( 1) 00:10:00.383 37479.222 - 37689.780: 99.1445% ( 7) 00:10:00.383 37689.780 - 37900.337: 99.2011% ( 8) 00:10:00.384 37900.337 - 38110.895: 99.2506% ( 7) 00:10:00.384 38110.895 - 38321.452: 99.3071% ( 8) 00:10:00.384 38321.452 - 38532.010: 99.3637% ( 8) 00:10:00.384 38532.010 - 38742.567: 99.4202% ( 8) 00:10:00.384 38742.567 - 38953.124: 99.4697% ( 7) 00:10:00.384 38953.124 - 39163.682: 99.5263% ( 8) 00:10:00.384 39163.682 - 39374.239: 99.5475% ( 3) 00:10:00.384 44427.618 - 44638.175: 99.5617% ( 2) 00:10:00.384 44638.175 - 44848.733: 99.6182% ( 8) 00:10:00.384 44848.733 - 45059.290: 99.6677% ( 7) 00:10:00.384 45059.290 - 45269.847: 99.7243% ( 8) 00:10:00.384 45269.847 - 45480.405: 99.7808% ( 8) 00:10:00.384 45480.405 - 45690.962: 99.8303% ( 7) 00:10:00.384 45690.962 - 45901.520: 99.8798% ( 7) 00:10:00.384 45901.520 - 46112.077: 99.9293% ( 7) 00:10:00.384 46112.077 - 46322.635: 99.9859% ( 8) 00:10:00.384 46322.635 - 46533.192: 100.0000% ( 2) 00:10:00.384 00:10:00.384 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:00.384 ============================================================================== 00:10:00.384 Range in us Cumulative IO count 00:10:00.384 8001.182 - 8053.822: 0.1343% ( 19) 00:10:00.384 8053.822 - 8106.461: 0.6434% ( 72) 00:10:00.384 8106.461 - 8159.100: 1.5413% ( 127) 00:10:00.384 8159.100 - 8211.740: 3.1604% ( 229) 00:10:00.384 8211.740 - 8264.379: 5.6278% ( 349) 00:10:00.384 8264.379 - 8317.018: 9.0003% ( 477) 00:10:00.384 8317.018 - 8369.658: 12.8747% ( 548) 00:10:00.384 8369.658 - 8422.297: 16.9966% ( 583) 00:10:00.384 8422.297 - 8474.937: 21.7124% ( 667) 00:10:00.384 8474.937 - 8527.576: 26.4635% ( 672) 00:10:00.384 8527.576 - 8580.215: 31.6035% ( 727) 00:10:00.384 8580.215 - 8632.855: 36.9910% ( 762) 00:10:00.384 8632.855 - 8685.494: 42.6117% ( 795) 00:10:00.384 8685.494 - 8738.133: 48.3102% ( 806) 00:10:00.384 8738.133 - 8790.773: 54.1572% ( 827) 00:10:00.384 8790.773 - 8843.412: 60.0749% ( 837) 00:10:00.384 8843.412 - 8896.051: 65.7876% ( 808) 00:10:00.384 8896.051 - 8948.691: 71.2458% ( 772) 00:10:00.384 8948.691 - 9001.330: 76.0605% ( 681) 00:10:00.384 9001.330 - 9053.969: 80.2814% ( 597) 00:10:00.384 9053.969 - 9106.609: 83.7175% ( 486) 00:10:00.384 9106.609 - 9159.248: 86.5031% ( 394) 00:10:00.384 9159.248 - 9211.888: 88.8363% ( 330) 00:10:00.384 9211.888 - 9264.527: 90.7381% ( 269) 00:10:00.384 9264.527 - 9317.166: 92.4137% ( 237) 00:10:00.384 9317.166 - 9369.806: 93.7359% ( 187) 00:10:00.384 9369.806 - 9422.445: 94.8176% ( 153) 00:10:00.384 9422.445 - 9475.084: 95.5529% ( 104) 00:10:00.384 9475.084 - 9527.724: 96.1397% ( 83) 00:10:00.384 9527.724 - 9580.363: 
96.6205% ( 68) 00:10:00.384 9580.363 - 9633.002: 96.9245% ( 43) 00:10:00.384 9633.002 - 9685.642: 97.1861% ( 37) 00:10:00.384 9685.642 - 9738.281: 97.4406% ( 36) 00:10:00.384 9738.281 - 9790.920: 97.6456% ( 29) 00:10:00.384 9790.920 - 9843.560: 97.7658% ( 17) 00:10:00.384 9843.560 - 9896.199: 97.8648% ( 14) 00:10:00.384 9896.199 - 9948.839: 97.9709% ( 15) 00:10:00.384 9948.839 - 10001.478: 98.0557% ( 12) 00:10:00.384 10001.478 - 10054.117: 98.1335% ( 11) 00:10:00.384 10054.117 - 10106.757: 98.2042% ( 10) 00:10:00.384 10106.757 - 10159.396: 98.2607% ( 8) 00:10:00.384 10159.396 - 10212.035: 98.3102% ( 7) 00:10:00.384 10212.035 - 10264.675: 98.3527% ( 6) 00:10:00.384 10264.675 - 10317.314: 98.3951% ( 6) 00:10:00.384 10317.314 - 10369.953: 98.4446% ( 7) 00:10:00.384 10369.953 - 10422.593: 98.4870% ( 6) 00:10:00.384 10422.593 - 10475.232: 98.5223% ( 5) 00:10:00.384 10475.232 - 10527.871: 98.5506% ( 4) 00:10:00.384 10527.871 - 10580.511: 98.5789% ( 4) 00:10:00.384 10580.511 - 10633.150: 98.6072% ( 4) 00:10:00.384 10633.150 - 10685.790: 98.6425% ( 5) 00:10:00.384 10685.790 - 10738.429: 98.6637% ( 3) 00:10:00.384 10738.429 - 10791.068: 98.6779% ( 2) 00:10:00.384 10791.068 - 10843.708: 98.6991% ( 3) 00:10:00.384 10843.708 - 10896.347: 98.7203% ( 3) 00:10:00.384 10896.347 - 10948.986: 98.7344% ( 2) 00:10:00.384 10948.986 - 11001.626: 98.7557% ( 3) 00:10:00.384 11001.626 - 11054.265: 98.7698% ( 2) 00:10:00.384 11054.265 - 11106.904: 98.7910% ( 3) 00:10:00.384 11106.904 - 11159.544: 98.8122% ( 3) 00:10:00.384 11159.544 - 11212.183: 98.8334% ( 3) 00:10:00.384 11212.183 - 11264.822: 98.8476% ( 2) 00:10:00.384 11264.822 - 11317.462: 98.8617% ( 2) 00:10:00.384 11317.462 - 11370.101: 98.8829% ( 3) 00:10:00.384 11370.101 - 11422.741: 98.8971% ( 2) 00:10:00.384 11422.741 - 11475.380: 98.9183% ( 3) 00:10:00.384 11475.380 - 11528.019: 98.9324% ( 2) 00:10:00.384 11528.019 - 11580.659: 98.9536% ( 3) 00:10:00.384 11580.659 - 11633.298: 98.9607% ( 1) 00:10:00.384 11633.298 - 11685.937: 98.9819% ( 3) 00:10:00.384 11685.937 - 11738.577: 98.9960% ( 2) 00:10:00.384 11738.577 - 11791.216: 99.0102% ( 2) 00:10:00.384 11791.216 - 11843.855: 99.0243% ( 2) 00:10:00.384 11843.855 - 11896.495: 99.0455% ( 3) 00:10:00.384 11896.495 - 11949.134: 99.0597% ( 2) 00:10:00.384 11949.134 - 12001.773: 99.0809% ( 3) 00:10:00.384 12001.773 - 12054.413: 99.0950% ( 2) 00:10:00.384 35584.206 - 35794.763: 99.1092% ( 2) 00:10:00.384 35794.763 - 36005.320: 99.1657% ( 8) 00:10:00.384 36005.320 - 36215.878: 99.2223% ( 8) 00:10:00.384 36215.878 - 36426.435: 99.2788% ( 8) 00:10:00.384 36426.435 - 36636.993: 99.3354% ( 8) 00:10:00.384 36636.993 - 36847.550: 99.3920% ( 8) 00:10:00.384 36847.550 - 37058.108: 99.4485% ( 8) 00:10:00.384 37058.108 - 37268.665: 99.5051% ( 8) 00:10:00.384 37268.665 - 37479.222: 99.5475% ( 6) 00:10:00.384 42532.601 - 42743.158: 99.5687% ( 3) 00:10:00.384 42743.158 - 42953.716: 99.6324% ( 9) 00:10:00.384 42953.716 - 43164.273: 99.6889% ( 8) 00:10:00.384 43164.273 - 43374.831: 99.7384% ( 7) 00:10:00.384 43374.831 - 43585.388: 99.7950% ( 8) 00:10:00.384 43585.388 - 43795.945: 99.8515% ( 8) 00:10:00.384 43795.945 - 44006.503: 99.8939% ( 6) 00:10:00.384 44006.503 - 44217.060: 99.9434% ( 7) 00:10:00.384 44217.060 - 44427.618: 100.0000% ( 8) 00:10:00.384 00:10:00.384 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:00.384 ============================================================================== 00:10:00.384 Range in us Cumulative IO count 00:10:00.384 7948.543 - 8001.182: 0.0283% ( 4) 00:10:00.384 8001.182 
- 8053.822: 0.1768% ( 21) 00:10:00.384 8053.822 - 8106.461: 0.6505% ( 67) 00:10:00.384 8106.461 - 8159.100: 1.6403% ( 140) 00:10:00.384 8159.100 - 8211.740: 3.3159% ( 237) 00:10:00.384 8211.740 - 8264.379: 6.1086% ( 395) 00:10:00.384 8264.379 - 8317.018: 9.4245% ( 469) 00:10:00.384 8317.018 - 8369.658: 13.2424% ( 540) 00:10:00.384 8369.658 - 8422.297: 17.5057% ( 603) 00:10:00.384 8422.297 - 8474.937: 22.1295% ( 654) 00:10:00.384 8474.937 - 8527.576: 26.9655% ( 684) 00:10:00.384 8527.576 - 8580.215: 32.0913% ( 725) 00:10:00.384 8580.215 - 8632.855: 37.3798% ( 748) 00:10:00.384 8632.855 - 8685.494: 42.8097% ( 768) 00:10:00.384 8685.494 - 8738.133: 48.4021% ( 791) 00:10:00.384 8738.133 - 8790.773: 54.0512% ( 799) 00:10:00.384 8790.773 - 8843.412: 59.8982% ( 827) 00:10:00.384 8843.412 - 8896.051: 65.4977% ( 792) 00:10:00.384 8896.051 - 8948.691: 70.8569% ( 758) 00:10:00.384 8948.691 - 9001.330: 75.9615% ( 722) 00:10:00.384 9001.330 - 9053.969: 80.1329% ( 590) 00:10:00.384 9053.969 - 9106.609: 83.5054% ( 477) 00:10:00.384 9106.609 - 9159.248: 86.2274% ( 385) 00:10:00.384 9159.248 - 9211.888: 88.4898% ( 320) 00:10:00.384 9211.888 - 9264.527: 90.3917% ( 269) 00:10:00.384 9264.527 - 9317.166: 92.0744% ( 238) 00:10:00.384 9317.166 - 9369.806: 93.3682% ( 183) 00:10:00.384 9369.806 - 9422.445: 94.3792% ( 143) 00:10:00.384 9422.445 - 9475.084: 95.2135% ( 118) 00:10:00.384 9475.084 - 9527.724: 95.9276% ( 101) 00:10:00.384 9527.724 - 9580.363: 96.4437% ( 73) 00:10:00.384 9580.363 - 9633.002: 96.8043% ( 51) 00:10:00.384 9633.002 - 9685.642: 97.0800% ( 39) 00:10:00.384 9685.642 - 9738.281: 97.3416% ( 37) 00:10:00.384 9738.281 - 9790.920: 97.5396% ( 28) 00:10:00.384 9790.920 - 9843.560: 97.7022% ( 23) 00:10:00.384 9843.560 - 9896.199: 97.8295% ( 18) 00:10:00.384 9896.199 - 9948.839: 97.9214% ( 13) 00:10:00.384 9948.839 - 10001.478: 98.0204% ( 14) 00:10:00.384 10001.478 - 10054.117: 98.1052% ( 12) 00:10:00.384 10054.117 - 10106.757: 98.1688% ( 9) 00:10:00.384 10106.757 - 10159.396: 98.2325% ( 9) 00:10:00.384 10159.396 - 10212.035: 98.2749% ( 6) 00:10:00.384 10212.035 - 10264.675: 98.3314% ( 8) 00:10:00.384 10264.675 - 10317.314: 98.3739% ( 6) 00:10:00.384 10317.314 - 10369.953: 98.4234% ( 7) 00:10:00.384 10369.953 - 10422.593: 98.4658% ( 6) 00:10:00.384 10422.593 - 10475.232: 98.5082% ( 6) 00:10:00.384 10475.232 - 10527.871: 98.5506% ( 6) 00:10:00.384 10527.871 - 10580.511: 98.5718% ( 3) 00:10:00.384 10580.511 - 10633.150: 98.5860% ( 2) 00:10:00.384 10633.150 - 10685.790: 98.6072% ( 3) 00:10:00.384 10685.790 - 10738.429: 98.6213% ( 2) 00:10:00.384 10738.429 - 10791.068: 98.6425% ( 3) 00:10:00.384 11001.626 - 11054.265: 98.6567% ( 2) 00:10:00.384 11054.265 - 11106.904: 98.6779% ( 3) 00:10:00.384 11106.904 - 11159.544: 98.6920% ( 2) 00:10:00.385 11159.544 - 11212.183: 98.7062% ( 2) 00:10:00.385 11212.183 - 11264.822: 98.7203% ( 2) 00:10:00.385 11264.822 - 11317.462: 98.7415% ( 3) 00:10:00.385 11317.462 - 11370.101: 98.7557% ( 2) 00:10:00.385 11370.101 - 11422.741: 98.7769% ( 3) 00:10:00.385 11422.741 - 11475.380: 98.7910% ( 2) 00:10:00.385 11475.380 - 11528.019: 98.8122% ( 3) 00:10:00.385 11528.019 - 11580.659: 98.8334% ( 3) 00:10:00.385 11580.659 - 11633.298: 98.8476% ( 2) 00:10:00.385 11633.298 - 11685.937: 98.8617% ( 2) 00:10:00.385 11685.937 - 11738.577: 98.8758% ( 2) 00:10:00.385 11738.577 - 11791.216: 98.8971% ( 3) 00:10:00.385 11791.216 - 11843.855: 98.9183% ( 3) 00:10:00.385 11843.855 - 11896.495: 98.9324% ( 2) 00:10:00.385 11896.495 - 11949.134: 98.9465% ( 2) 00:10:00.385 11949.134 - 12001.773: 
98.9607% ( 2) 00:10:00.385 12001.773 - 12054.413: 98.9819% ( 3) 00:10:00.385 12054.413 - 12107.052: 98.9960% ( 2) 00:10:00.385 12107.052 - 12159.692: 99.0102% ( 2) 00:10:00.385 12159.692 - 12212.331: 99.0243% ( 2) 00:10:00.385 12212.331 - 12264.970: 99.0455% ( 3) 00:10:00.385 12264.970 - 12317.610: 99.0597% ( 2) 00:10:00.385 12317.610 - 12370.249: 99.0809% ( 3) 00:10:00.385 12370.249 - 12422.888: 99.0950% ( 2) 00:10:00.385 33899.746 - 34110.304: 99.1445% ( 7) 00:10:00.385 34110.304 - 34320.861: 99.1940% ( 7) 00:10:00.385 34320.861 - 34531.418: 99.2576% ( 9) 00:10:00.385 34531.418 - 34741.976: 99.3071% ( 7) 00:10:00.385 34741.976 - 34952.533: 99.3708% ( 9) 00:10:00.385 34952.533 - 35163.091: 99.4202% ( 7) 00:10:00.385 35163.091 - 35373.648: 99.4768% ( 8) 00:10:00.385 35373.648 - 35584.206: 99.5334% ( 8) 00:10:00.385 35584.206 - 35794.763: 99.5475% ( 2) 00:10:00.385 40637.584 - 40848.141: 99.5758% ( 4) 00:10:00.385 40848.141 - 41058.699: 99.6324% ( 8) 00:10:00.385 41058.699 - 41269.256: 99.6818% ( 7) 00:10:00.385 41269.256 - 41479.814: 99.7313% ( 7) 00:10:00.385 41479.814 - 41690.371: 99.7808% ( 7) 00:10:00.385 41690.371 - 41900.929: 99.8374% ( 8) 00:10:00.385 41900.929 - 42111.486: 99.8939% ( 8) 00:10:00.385 42111.486 - 42322.043: 99.9505% ( 8) 00:10:00.385 42322.043 - 42532.601: 100.0000% ( 7) 00:10:00.385 00:10:00.385 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:00.385 ============================================================================== 00:10:00.385 Range in us Cumulative IO count 00:10:00.385 7948.543 - 8001.182: 0.0352% ( 5) 00:10:00.385 8001.182 - 8053.822: 0.2182% ( 26) 00:10:00.385 8053.822 - 8106.461: 0.8868% ( 95) 00:10:00.385 8106.461 - 8159.100: 1.8159% ( 132) 00:10:00.385 8159.100 - 8211.740: 3.6036% ( 254) 00:10:00.385 8211.740 - 8264.379: 6.2148% ( 371) 00:10:00.385 8264.379 - 8317.018: 9.2765% ( 435) 00:10:00.385 8317.018 - 8369.658: 13.0983% ( 543) 00:10:00.385 8369.658 - 8422.297: 17.4901% ( 624) 00:10:00.385 8422.297 - 8474.937: 22.1565% ( 663) 00:10:00.385 8474.937 - 8527.576: 26.8018% ( 660) 00:10:00.385 8527.576 - 8580.215: 31.8764% ( 721) 00:10:00.385 8580.215 - 8632.855: 37.0566% ( 736) 00:10:00.385 8632.855 - 8685.494: 42.4127% ( 761) 00:10:00.385 8685.494 - 8738.133: 47.9659% ( 789) 00:10:00.385 8738.133 - 8790.773: 53.6951% ( 814) 00:10:00.385 8790.773 - 8843.412: 59.4735% ( 821) 00:10:00.385 8843.412 - 8896.051: 65.0831% ( 797) 00:10:00.385 8896.051 - 8948.691: 70.4110% ( 757) 00:10:00.385 8948.691 - 9001.330: 75.4153% ( 711) 00:10:00.385 9001.330 - 9053.969: 79.7297% ( 613) 00:10:00.385 9053.969 - 9106.609: 83.1715% ( 489) 00:10:00.385 9106.609 - 9159.248: 86.0290% ( 406) 00:10:00.385 9159.248 - 9211.888: 88.2812% ( 320) 00:10:00.385 9211.888 - 9264.527: 90.0127% ( 246) 00:10:00.385 9264.527 - 9317.166: 91.5822% ( 223) 00:10:00.385 9317.166 - 9369.806: 92.9124% ( 189) 00:10:00.385 9369.806 - 9422.445: 93.9682% ( 150) 00:10:00.385 9422.445 - 9475.084: 94.8339% ( 123) 00:10:00.385 9475.084 - 9527.724: 95.5307% ( 99) 00:10:00.385 9527.724 - 9580.363: 96.1501% ( 88) 00:10:00.385 9580.363 - 9633.002: 96.5653% ( 59) 00:10:00.385 9633.002 - 9685.642: 96.8046% ( 34) 00:10:00.385 9685.642 - 9738.281: 96.9735% ( 24) 00:10:00.385 9738.281 - 9790.920: 97.1425% ( 24) 00:10:00.385 9790.920 - 9843.560: 97.2762% ( 19) 00:10:00.385 9843.560 - 9896.199: 97.3677% ( 13) 00:10:00.385 9896.199 - 9948.839: 97.4662% ( 14) 00:10:00.385 9948.839 - 10001.478: 97.5507% ( 12) 00:10:00.385 10001.478 - 10054.117: 97.6492% ( 14) 00:10:00.385 10054.117 - 
10106.757: 97.7337% ( 12) 00:10:00.385 10106.757 - 10159.396: 97.7759% ( 6) 00:10:00.385 10159.396 - 10212.035: 97.8252% ( 7) 00:10:00.385 10212.035 - 10264.675: 97.8674% ( 6) 00:10:00.385 10264.675 - 10317.314: 97.9096% ( 6) 00:10:00.385 10317.314 - 10369.953: 97.9519% ( 6) 00:10:00.385 10369.953 - 10422.593: 97.9730% ( 3) 00:10:00.385 10422.593 - 10475.232: 98.0222% ( 7) 00:10:00.385 10475.232 - 10527.871: 98.0715% ( 7) 00:10:00.385 10527.871 - 10580.511: 98.1278% ( 8) 00:10:00.385 10580.511 - 10633.150: 98.1771% ( 7) 00:10:00.385 10633.150 - 10685.790: 98.2193% ( 6) 00:10:00.385 10685.790 - 10738.429: 98.2686% ( 7) 00:10:00.385 10738.429 - 10791.068: 98.3178% ( 7) 00:10:00.385 10791.068 - 10843.708: 98.3671% ( 7) 00:10:00.385 10843.708 - 10896.347: 98.4093% ( 6) 00:10:00.385 10896.347 - 10948.986: 98.4516% ( 6) 00:10:00.385 10948.986 - 11001.626: 98.5008% ( 7) 00:10:00.385 11001.626 - 11054.265: 98.5431% ( 6) 00:10:00.385 11054.265 - 11106.904: 98.5853% ( 6) 00:10:00.385 11106.904 - 11159.544: 98.6135% ( 4) 00:10:00.385 11159.544 - 11212.183: 98.6416% ( 4) 00:10:00.385 11212.183 - 11264.822: 98.6486% ( 1) 00:10:00.385 11370.101 - 11422.741: 98.6698% ( 3) 00:10:00.385 11422.741 - 11475.380: 98.6838% ( 2) 00:10:00.385 11475.380 - 11528.019: 98.6979% ( 2) 00:10:00.385 11528.019 - 11580.659: 98.7120% ( 2) 00:10:00.385 11580.659 - 11633.298: 98.7331% ( 3) 00:10:00.385 11633.298 - 11685.937: 98.7472% ( 2) 00:10:00.385 11685.937 - 11738.577: 98.7683% ( 3) 00:10:00.385 11738.577 - 11791.216: 98.7965% ( 4) 00:10:00.385 11791.216 - 11843.855: 98.8176% ( 3) 00:10:00.385 11843.855 - 11896.495: 98.8316% ( 2) 00:10:00.385 11896.495 - 11949.134: 98.8457% ( 2) 00:10:00.385 11949.134 - 12001.773: 98.8598% ( 2) 00:10:00.385 12001.773 - 12054.413: 98.8809% ( 3) 00:10:00.385 12054.413 - 12107.052: 98.8950% ( 2) 00:10:00.385 12107.052 - 12159.692: 98.9091% ( 2) 00:10:00.385 12159.692 - 12212.331: 98.9231% ( 2) 00:10:00.385 12212.331 - 12264.970: 98.9372% ( 2) 00:10:00.385 12264.970 - 12317.610: 98.9513% ( 2) 00:10:00.385 12317.610 - 12370.249: 98.9724% ( 3) 00:10:00.385 12370.249 - 12422.888: 98.9935% ( 3) 00:10:00.385 12422.888 - 12475.528: 99.0076% ( 2) 00:10:00.385 12475.528 - 12528.167: 99.0217% ( 2) 00:10:00.385 12528.167 - 12580.806: 99.0428% ( 3) 00:10:00.385 12580.806 - 12633.446: 99.0569% ( 2) 00:10:00.385 12633.446 - 12686.085: 99.0780% ( 3) 00:10:00.385 12686.085 - 12738.724: 99.0921% ( 2) 00:10:00.385 12738.724 - 12791.364: 99.0991% ( 1) 00:10:00.385 26635.515 - 26740.794: 99.1061% ( 1) 00:10:00.385 26740.794 - 26846.072: 99.1273% ( 3) 00:10:00.385 26846.072 - 26951.351: 99.1624% ( 5) 00:10:00.385 26951.351 - 27161.908: 99.2117% ( 7) 00:10:00.385 27161.908 - 27372.466: 99.2751% ( 9) 00:10:00.385 27372.466 - 27583.023: 99.3243% ( 7) 00:10:00.385 27583.023 - 27793.581: 99.3877% ( 9) 00:10:00.385 27793.581 - 28004.138: 99.4440% ( 8) 00:10:00.385 28004.138 - 28214.696: 99.5073% ( 9) 00:10:00.385 28214.696 - 28425.253: 99.5495% ( 6) 00:10:00.385 33478.631 - 33689.189: 99.5777% ( 4) 00:10:00.385 33689.189 - 33899.746: 99.6340% ( 8) 00:10:00.385 33899.746 - 34110.304: 99.6903% ( 8) 00:10:00.385 34110.304 - 34320.861: 99.7537% ( 9) 00:10:00.385 34320.861 - 34531.418: 99.8029% ( 7) 00:10:00.385 34531.418 - 34741.976: 99.8592% ( 8) 00:10:00.385 34741.976 - 34952.533: 99.9155% ( 8) 00:10:00.385 34952.533 - 35163.091: 99.9718% ( 8) 00:10:00.385 35163.091 - 35373.648: 100.0000% ( 4) 00:10:00.385 00:10:00.385 03:54:42 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 
-q 128 -w write -o 12288 -t 1 -LL -i 0 00:10:01.767 Initializing NVMe Controllers 00:10:01.767 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:01.767 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:01.767 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:01.767 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:01.767 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:01.768 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:01.768 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:01.768 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:01.768 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:01.768 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:01.768 Initialization complete. Launching workers. 00:10:01.768 ======================================================== 00:10:01.768 Latency(us) 00:10:01.768 Device Information : IOPS MiB/s Average min max 00:10:01.768 PCIE (0000:00:10.0) NSID 1 from core 0: 11874.91 139.16 10807.05 8107.34 44621.67 00:10:01.768 PCIE (0000:00:11.0) NSID 1 from core 0: 11874.91 139.16 10792.26 7950.40 43436.25 00:10:01.768 PCIE (0000:00:13.0) NSID 1 from core 0: 11874.91 139.16 10775.79 8328.75 42630.06 00:10:01.768 PCIE (0000:00:12.0) NSID 1 from core 0: 11874.91 139.16 10759.09 8387.25 41332.11 00:10:01.768 PCIE (0000:00:12.0) NSID 2 from core 0: 11874.91 139.16 10743.87 8172.54 39898.64 00:10:01.768 PCIE (0000:00:12.0) NSID 3 from core 0: 11938.75 139.91 10671.26 8425.57 30210.05 00:10:01.768 ======================================================== 00:10:01.768 Total : 71313.28 835.70 10758.14 7950.40 44621.67 00:10:01.768 00:10:01.768 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:01.768 ================================================================================= 00:10:01.768 1.00000% : 8790.773us 00:10:01.768 10.00000% : 9211.888us 00:10:01.768 25.00000% : 9527.724us 00:10:01.768 50.00000% : 9896.199us 00:10:01.768 75.00000% : 10580.511us 00:10:01.768 90.00000% : 13896.790us 00:10:01.768 95.00000% : 15581.250us 00:10:01.768 98.00000% : 17686.824us 00:10:01.768 99.00000% : 32215.287us 00:10:01.768 99.50000% : 42111.486us 00:10:01.768 99.90000% : 44217.060us 00:10:01.768 99.99000% : 44638.175us 00:10:01.768 99.99900% : 44638.175us 00:10:01.768 99.99990% : 44638.175us 00:10:01.768 99.99999% : 44638.175us 00:10:01.768 00:10:01.768 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:01.768 ================================================================================= 00:10:01.768 1.00000% : 8790.773us 00:10:01.768 10.00000% : 9211.888us 00:10:01.768 25.00000% : 9527.724us 00:10:01.768 50.00000% : 9896.199us 00:10:01.768 75.00000% : 10580.511us 00:10:01.768 90.00000% : 14002.069us 00:10:01.768 95.00000% : 15475.971us 00:10:01.768 98.00000% : 18318.496us 00:10:01.768 99.00000% : 31583.614us 00:10:01.768 99.50000% : 41058.699us 00:10:01.768 99.90000% : 43164.273us 00:10:01.768 99.99000% : 43585.388us 00:10:01.768 99.99900% : 43585.388us 00:10:01.768 99.99990% : 43585.388us 00:10:01.768 99.99999% : 43585.388us 00:10:01.768 00:10:01.768 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:01.768 ================================================================================= 00:10:01.768 1.00000% : 8790.773us 00:10:01.768 10.00000% : 9264.527us 00:10:01.768 25.00000% : 9527.724us 00:10:01.768 50.00000% : 9843.560us 00:10:01.768 75.00000% : 10580.511us 00:10:01.768 90.00000% : 13686.233us 00:10:01.768 
95.00000% : 15897.086us 00:10:01.768 98.00000% : 17897.382us 00:10:01.768 99.00000% : 30530.827us 00:10:01.768 99.50000% : 40427.027us 00:10:01.768 99.90000% : 42322.043us 00:10:01.768 99.99000% : 42743.158us 00:10:01.768 99.99900% : 42743.158us 00:10:01.768 99.99990% : 42743.158us 00:10:01.768 99.99999% : 42743.158us 00:10:01.768 00:10:01.768 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:01.768 ================================================================================= 00:10:01.768 1.00000% : 8843.412us 00:10:01.768 10.00000% : 9264.527us 00:10:01.768 25.00000% : 9527.724us 00:10:01.768 50.00000% : 9896.199us 00:10:01.768 75.00000% : 10580.511us 00:10:01.768 90.00000% : 13475.676us 00:10:01.768 95.00000% : 15791.807us 00:10:01.768 98.00000% : 17370.988us 00:10:01.768 99.00000% : 29267.483us 00:10:01.768 99.50000% : 39163.682us 00:10:01.768 99.90000% : 41058.699us 00:10:01.768 99.99000% : 41479.814us 00:10:01.768 99.99900% : 41479.814us 00:10:01.768 99.99990% : 41479.814us 00:10:01.768 99.99999% : 41479.814us 00:10:01.768 00:10:01.768 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:01.768 ================================================================================= 00:10:01.768 1.00000% : 8790.773us 00:10:01.768 10.00000% : 9264.527us 00:10:01.768 25.00000% : 9527.724us 00:10:01.768 50.00000% : 9896.199us 00:10:01.768 75.00000% : 10633.150us 00:10:01.768 90.00000% : 13265.118us 00:10:01.768 95.00000% : 15686.529us 00:10:01.768 98.00000% : 17265.709us 00:10:01.768 99.00000% : 28214.696us 00:10:01.768 99.50000% : 37900.337us 00:10:01.768 99.90000% : 39584.797us 00:10:01.768 99.99000% : 40005.912us 00:10:01.768 99.99900% : 40005.912us 00:10:01.768 99.99990% : 40005.912us 00:10:01.768 99.99999% : 40005.912us 00:10:01.768 00:10:01.768 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:01.768 ================================================================================= 00:10:01.768 1.00000% : 8843.412us 00:10:01.768 10.00000% : 9264.527us 00:10:01.768 25.00000% : 9527.724us 00:10:01.768 50.00000% : 9896.199us 00:10:01.768 75.00000% : 10685.790us 00:10:01.768 90.00000% : 13791.512us 00:10:01.768 95.00000% : 15791.807us 00:10:01.768 98.00000% : 17476.267us 00:10:01.768 99.00000% : 18950.169us 00:10:01.768 99.50000% : 28004.138us 00:10:01.768 99.90000% : 29899.155us 00:10:01.768 99.99000% : 30320.270us 00:10:01.768 99.99900% : 30320.270us 00:10:01.768 99.99990% : 30320.270us 00:10:01.768 99.99999% : 30320.270us 00:10:01.768 00:10:01.768 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:10:01.768 ============================================================================== 00:10:01.768 Range in us Cumulative IO count 00:10:01.768 8106.461 - 8159.100: 0.0084% ( 1) 00:10:01.768 8211.740 - 8264.379: 0.0252% ( 2) 00:10:01.768 8264.379 - 8317.018: 0.1092% ( 10) 00:10:01.768 8317.018 - 8369.658: 0.1260% ( 2) 00:10:01.768 8369.658 - 8422.297: 0.2016% ( 9) 00:10:01.768 8422.297 - 8474.937: 0.3528% ( 18) 00:10:01.768 8474.937 - 8527.576: 0.4788% ( 15) 00:10:01.768 8527.576 - 8580.215: 0.5208% ( 5) 00:10:01.768 8580.215 - 8632.855: 0.5880% ( 8) 00:10:01.768 8632.855 - 8685.494: 0.7224% ( 16) 00:10:01.768 8685.494 - 8738.133: 0.9073% ( 22) 00:10:01.768 8738.133 - 8790.773: 1.2097% ( 36) 00:10:01.768 8790.773 - 8843.412: 1.5961% ( 46) 00:10:01.768 8843.412 - 8896.051: 2.2765% ( 81) 00:10:01.768 8896.051 - 8948.691: 3.0914% ( 97) 00:10:01.768 8948.691 - 9001.330: 4.2171% ( 134) 00:10:01.768 9001.330 - 9053.969: 
5.5696% ( 161) 00:10:01.768 9053.969 - 9106.609: 7.3505% ( 212) 00:10:01.768 9106.609 - 9159.248: 9.0810% ( 206) 00:10:01.768 9159.248 - 9211.888: 10.9459% ( 222) 00:10:01.768 9211.888 - 9264.527: 13.1132% ( 258) 00:10:01.768 9264.527 - 9317.166: 15.0874% ( 235) 00:10:01.768 9317.166 - 9369.806: 17.3975% ( 275) 00:10:01.768 9369.806 - 9422.445: 19.9261% ( 301) 00:10:01.768 9422.445 - 9475.084: 23.0175% ( 368) 00:10:01.768 9475.084 - 9527.724: 27.1001% ( 486) 00:10:01.768 9527.724 - 9580.363: 31.5104% ( 525) 00:10:01.768 9580.363 - 9633.002: 36.2819% ( 568) 00:10:01.768 9633.002 - 9685.642: 40.2050% ( 467) 00:10:01.769 9685.642 - 9738.281: 43.6828% ( 414) 00:10:01.769 9738.281 - 9790.920: 46.7322% ( 363) 00:10:01.769 9790.920 - 9843.560: 49.7648% ( 361) 00:10:01.769 9843.560 - 9896.199: 52.6462% ( 343) 00:10:01.769 9896.199 - 9948.839: 55.3175% ( 318) 00:10:01.769 9948.839 - 10001.478: 57.9721% ( 316) 00:10:01.769 10001.478 - 10054.117: 60.2487% ( 271) 00:10:01.769 10054.117 - 10106.757: 62.4160% ( 258) 00:10:01.769 10106.757 - 10159.396: 64.3649% ( 232) 00:10:01.769 10159.396 - 10212.035: 65.9022% ( 183) 00:10:01.769 10212.035 - 10264.675: 67.3135% ( 168) 00:10:01.769 10264.675 - 10317.314: 68.6492% ( 159) 00:10:01.769 10317.314 - 10369.953: 69.8757% ( 146) 00:10:01.769 10369.953 - 10422.593: 71.5558% ( 200) 00:10:01.769 10422.593 - 10475.232: 73.0595% ( 179) 00:10:01.769 10475.232 - 10527.871: 74.1347% ( 128) 00:10:01.769 10527.871 - 10580.511: 75.0000% ( 103) 00:10:01.769 10580.511 - 10633.150: 75.6804% ( 81) 00:10:01.769 10633.150 - 10685.790: 76.3861% ( 84) 00:10:01.769 10685.790 - 10738.429: 76.9153% ( 63) 00:10:01.769 10738.429 - 10791.068: 77.5790% ( 79) 00:10:01.769 10791.068 - 10843.708: 78.3266% ( 89) 00:10:01.769 10843.708 - 10896.347: 78.8306% ( 60) 00:10:01.769 10896.347 - 10948.986: 79.3095% ( 57) 00:10:01.769 10948.986 - 11001.626: 79.7463% ( 52) 00:10:01.769 11001.626 - 11054.265: 80.2503% ( 60) 00:10:01.769 11054.265 - 11106.904: 80.7040% ( 54) 00:10:01.769 11106.904 - 11159.544: 81.0232% ( 38) 00:10:01.769 11159.544 - 11212.183: 81.4012% ( 45) 00:10:01.769 11212.183 - 11264.822: 81.8548% ( 54) 00:10:01.769 11264.822 - 11317.462: 82.1993% ( 41) 00:10:01.769 11317.462 - 11370.101: 82.5941% ( 47) 00:10:01.769 11370.101 - 11422.741: 83.0225% ( 51) 00:10:01.769 11422.741 - 11475.380: 83.4929% ( 56) 00:10:01.769 11475.380 - 11528.019: 83.6778% ( 22) 00:10:01.769 11528.019 - 11580.659: 83.8878% ( 25) 00:10:01.769 11580.659 - 11633.298: 84.0810% ( 23) 00:10:01.769 11633.298 - 11685.937: 84.2406% ( 19) 00:10:01.769 11685.937 - 11738.577: 84.4170% ( 21) 00:10:01.769 11738.577 - 11791.216: 84.6186% ( 24) 00:10:01.769 11791.216 - 11843.855: 84.8118% ( 23) 00:10:01.769 11843.855 - 11896.495: 84.9798% ( 20) 00:10:01.769 11896.495 - 11949.134: 85.1647% ( 22) 00:10:01.769 11949.134 - 12001.773: 85.3243% ( 19) 00:10:01.769 12001.773 - 12054.413: 85.5259% ( 24) 00:10:01.769 12054.413 - 12107.052: 85.6939% ( 20) 00:10:01.769 12107.052 - 12159.692: 85.9375% ( 29) 00:10:01.769 12159.692 - 12212.331: 86.2399% ( 36) 00:10:01.769 12212.331 - 12264.970: 86.5003% ( 31) 00:10:01.769 12264.970 - 12317.610: 86.6935% ( 23) 00:10:01.769 12317.610 - 12370.249: 86.9288% ( 28) 00:10:01.769 12370.249 - 12422.888: 87.0800% ( 18) 00:10:01.769 12422.888 - 12475.528: 87.1976% ( 14) 00:10:01.769 12475.528 - 12528.167: 87.3068% ( 13) 00:10:01.769 12528.167 - 12580.806: 87.4160% ( 13) 00:10:01.769 12580.806 - 12633.446: 87.5168% ( 12) 00:10:01.769 12633.446 - 12686.085: 87.5924% ( 9) 00:10:01.769 12686.085 - 
12738.724: 87.6680% ( 9) 00:10:01.769 12738.724 - 12791.364: 87.7436% ( 9) 00:10:01.769 12791.364 - 12844.003: 87.8192% ( 9) 00:10:01.769 12844.003 - 12896.643: 87.9452% ( 15) 00:10:01.769 12896.643 - 12949.282: 88.0628% ( 14) 00:10:01.769 12949.282 - 13001.921: 88.1636% ( 12) 00:10:01.769 13001.921 - 13054.561: 88.2897% ( 15) 00:10:01.769 13054.561 - 13107.200: 88.4073% ( 14) 00:10:01.769 13107.200 - 13159.839: 88.5081% ( 12) 00:10:01.769 13159.839 - 13212.479: 88.6761% ( 20) 00:10:01.769 13212.479 - 13265.118: 88.8021% ( 15) 00:10:01.769 13265.118 - 13317.757: 88.9113% ( 13) 00:10:01.769 13317.757 - 13370.397: 88.9701% ( 7) 00:10:01.769 13370.397 - 13423.036: 89.0457% ( 9) 00:10:01.769 13423.036 - 13475.676: 89.1549% ( 13) 00:10:01.769 13475.676 - 13580.954: 89.2893% ( 16) 00:10:01.769 13580.954 - 13686.233: 89.5581% ( 32) 00:10:01.769 13686.233 - 13791.512: 89.7261% ( 20) 00:10:01.769 13791.512 - 13896.790: 90.0202% ( 35) 00:10:01.769 13896.790 - 14002.069: 90.2722% ( 30) 00:10:01.769 14002.069 - 14107.348: 90.5998% ( 39) 00:10:01.769 14107.348 - 14212.627: 90.9526% ( 42) 00:10:01.769 14212.627 - 14317.905: 91.3222% ( 44) 00:10:01.769 14317.905 - 14423.184: 91.5575% ( 28) 00:10:01.769 14423.184 - 14528.463: 91.7675% ( 25) 00:10:01.769 14528.463 - 14633.741: 92.1539% ( 46) 00:10:01.769 14633.741 - 14739.020: 92.5319% ( 45) 00:10:01.769 14739.020 - 14844.299: 92.9183% ( 46) 00:10:01.769 14844.299 - 14949.578: 93.1116% ( 23) 00:10:01.769 14949.578 - 15054.856: 93.3300% ( 26) 00:10:01.769 15054.856 - 15160.135: 93.5148% ( 22) 00:10:01.769 15160.135 - 15265.414: 93.7920% ( 33) 00:10:01.769 15265.414 - 15370.692: 94.1700% ( 45) 00:10:01.769 15370.692 - 15475.971: 94.7077% ( 64) 00:10:01.769 15475.971 - 15581.250: 95.0437% ( 40) 00:10:01.769 15581.250 - 15686.529: 95.3125% ( 32) 00:10:01.769 15686.529 - 15791.807: 95.5141% ( 24) 00:10:01.769 15791.807 - 15897.086: 95.7661% ( 30) 00:10:01.769 15897.086 - 16002.365: 95.9509% ( 22) 00:10:01.769 16002.365 - 16107.643: 96.1358% ( 22) 00:10:01.769 16107.643 - 16212.922: 96.2702% ( 16) 00:10:01.769 16212.922 - 16318.201: 96.4718% ( 24) 00:10:01.769 16318.201 - 16423.480: 96.5810% ( 13) 00:10:01.769 16423.480 - 16528.758: 96.6986% ( 14) 00:10:01.769 16528.758 - 16634.037: 96.7994% ( 12) 00:10:01.769 16634.037 - 16739.316: 97.1102% ( 37) 00:10:01.769 16739.316 - 16844.594: 97.3034% ( 23) 00:10:01.769 16844.594 - 16949.873: 97.4126% ( 13) 00:10:01.769 16949.873 - 17055.152: 97.5386% ( 15) 00:10:01.769 17055.152 - 17160.431: 97.6058% ( 8) 00:10:01.769 17160.431 - 17265.709: 97.6478% ( 5) 00:10:01.769 17265.709 - 17370.988: 97.7067% ( 7) 00:10:01.769 17370.988 - 17476.267: 97.7571% ( 6) 00:10:01.769 17476.267 - 17581.545: 97.8663% ( 13) 00:10:01.769 17581.545 - 17686.824: 98.0343% ( 20) 00:10:01.769 17686.824 - 17792.103: 98.1015% ( 8) 00:10:01.769 17792.103 - 17897.382: 98.2443% ( 17) 00:10:01.769 17897.382 - 18002.660: 98.3451% ( 12) 00:10:01.769 18002.660 - 18107.939: 98.5131% ( 20) 00:10:01.769 18107.939 - 18213.218: 98.6055% ( 11) 00:10:01.769 18213.218 - 18318.496: 98.7147% ( 13) 00:10:01.769 18318.496 - 18423.775: 98.8407% ( 15) 00:10:01.769 18423.775 - 18529.054: 98.8575% ( 2) 00:10:01.769 18529.054 - 18634.333: 98.8743% ( 2) 00:10:01.769 18634.333 - 18739.611: 98.9247% ( 6) 00:10:01.769 32004.729 - 32215.287: 99.0087% ( 10) 00:10:01.769 32215.287 - 32425.844: 99.0759% ( 8) 00:10:01.769 32425.844 - 32636.402: 99.1515% ( 9) 00:10:01.769 32636.402 - 32846.959: 99.2019% ( 6) 00:10:01.769 32846.959 - 33057.516: 99.2692% ( 8) 00:10:01.769 33057.516 - 
33268.074: 99.3280% ( 7) 00:10:01.769 33268.074 - 33478.631: 99.4120% ( 10) 00:10:01.769 33478.631 - 33689.189: 99.4624% ( 6) 00:10:01.769 41690.371 - 41900.929: 99.4792% ( 2) 00:10:01.769 41900.929 - 42111.486: 99.5212% ( 5) 00:10:01.769 42111.486 - 42322.043: 99.5632% ( 5) 00:10:01.769 42322.043 - 42532.601: 99.6052% ( 5) 00:10:01.769 42532.601 - 42743.158: 99.6304% ( 3) 00:10:01.769 42743.158 - 42953.716: 99.6724% ( 5) 00:10:01.769 42953.716 - 43164.273: 99.7228% ( 6) 00:10:01.769 43164.273 - 43374.831: 99.7564% ( 4) 00:10:01.769 43374.831 - 43585.388: 99.7984% ( 5) 00:10:01.769 43585.388 - 43795.945: 99.8320% ( 4) 00:10:01.769 43795.945 - 44006.503: 99.8740% ( 5) 00:10:01.769 44006.503 - 44217.060: 99.9160% ( 5) 00:10:01.769 44217.060 - 44427.618: 99.9580% ( 5) 00:10:01.769 44427.618 - 44638.175: 100.0000% ( 5) 00:10:01.769 00:10:01.770 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:10:01.770 ============================================================================== 00:10:01.770 Range in us Cumulative IO count 00:10:01.770 7948.543 - 8001.182: 0.0084% ( 1) 00:10:01.770 8053.822 - 8106.461: 0.0420% ( 4) 00:10:01.770 8106.461 - 8159.100: 0.0756% ( 4) 00:10:01.770 8159.100 - 8211.740: 0.1176% ( 5) 00:10:01.770 8211.740 - 8264.379: 0.1512% ( 4) 00:10:01.770 8264.379 - 8317.018: 0.1764% ( 3) 00:10:01.770 8369.658 - 8422.297: 0.1932% ( 2) 00:10:01.770 8422.297 - 8474.937: 0.2268% ( 4) 00:10:01.770 8474.937 - 8527.576: 0.2940% ( 8) 00:10:01.770 8527.576 - 8580.215: 0.3948% ( 12) 00:10:01.770 8580.215 - 8632.855: 0.5460% ( 18) 00:10:01.770 8632.855 - 8685.494: 0.7392% ( 23) 00:10:01.770 8685.494 - 8738.133: 0.9661% ( 27) 00:10:01.770 8738.133 - 8790.773: 1.3105% ( 41) 00:10:01.770 8790.773 - 8843.412: 1.6297% ( 38) 00:10:01.770 8843.412 - 8896.051: 2.3353% ( 84) 00:10:01.770 8896.051 - 8948.691: 3.2846% ( 113) 00:10:01.770 8948.691 - 9001.330: 4.3599% ( 128) 00:10:01.770 9001.330 - 9053.969: 5.2251% ( 103) 00:10:01.770 9053.969 - 9106.609: 6.6196% ( 166) 00:10:01.770 9106.609 - 9159.248: 8.5685% ( 232) 00:10:01.770 9159.248 - 9211.888: 10.7779% ( 263) 00:10:01.770 9211.888 - 9264.527: 13.6425% ( 341) 00:10:01.770 9264.527 - 9317.166: 16.0114% ( 282) 00:10:01.770 9317.166 - 9369.806: 18.6828% ( 318) 00:10:01.770 9369.806 - 9422.445: 21.2786% ( 309) 00:10:01.770 9422.445 - 9475.084: 24.3784% ( 369) 00:10:01.770 9475.084 - 9527.724: 27.7638% ( 403) 00:10:01.770 9527.724 - 9580.363: 31.0820% ( 395) 00:10:01.770 9580.363 - 9633.002: 35.5847% ( 536) 00:10:01.770 9633.002 - 9685.642: 39.5665% ( 474) 00:10:01.770 9685.642 - 9738.281: 43.5064% ( 469) 00:10:01.770 9738.281 - 9790.920: 46.7154% ( 382) 00:10:01.770 9790.920 - 9843.560: 49.9076% ( 380) 00:10:01.770 9843.560 - 9896.199: 52.7890% ( 343) 00:10:01.770 9896.199 - 9948.839: 55.5444% ( 328) 00:10:01.770 9948.839 - 10001.478: 57.6781% ( 254) 00:10:01.770 10001.478 - 10054.117: 59.9798% ( 274) 00:10:01.770 10054.117 - 10106.757: 62.0548% ( 247) 00:10:01.770 10106.757 - 10159.396: 63.9869% ( 230) 00:10:01.770 10159.396 - 10212.035: 65.6754% ( 201) 00:10:01.770 10212.035 - 10264.675: 67.3555% ( 200) 00:10:01.770 10264.675 - 10317.314: 68.5904% ( 147) 00:10:01.770 10317.314 - 10369.953: 70.1025% ( 180) 00:10:01.770 10369.953 - 10422.593: 71.5474% ( 172) 00:10:01.770 10422.593 - 10475.232: 73.0595% ( 180) 00:10:01.770 10475.232 - 10527.871: 74.1935% ( 135) 00:10:01.770 10527.871 - 10580.511: 75.2268% ( 123) 00:10:01.770 10580.511 - 10633.150: 76.1257% ( 107) 00:10:01.770 10633.150 - 10685.790: 76.9993% ( 104) 00:10:01.770 
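Each histogram line above reads "bucket range in us: cumulative percentage (I/O count in that bucket)", so the percentile summaries printed earlier (e.g. "50.00000% : 9896.199us" for 0000:00:10.0) fall out of a single walk over the buckets: report the upper bound of the first bucket whose cumulative percentage reaches the target. A minimal sketch, with bucket values copied from the 0000:00:10.0 histogram; the helper name is illustrative, not SPDK API:

```c
/* Reading a percentile off a cumulative latency histogram: walk the
 * buckets until the cumulative percentage crosses the target and report
 * that bucket's upper bound. */
#include <stdio.h>

struct bucket { double end_us; double cum_pct; };

static double percentile_us(const struct bucket *b, int n, double target)
{
    for (int i = 0; i < n; i++)
        if (b[i].cum_pct >= target)
            return b[i].end_us;
    return b[n - 1].end_us;
}

int main(void)
{
    /* Three buckets copied from the 0000:00:10.0 histogram above. */
    const struct bucket b[] = {
        { 9843.560, 49.7648 },
        { 9896.199, 52.6462 },   /* first bucket at or past 50% */
        { 9948.839, 55.3175 },
    };
    /* Prints 9896.199, matching "50.00000% : 9896.199us" in the summary. */
    printf("p50 ~ %.3f us\n", percentile_us(b, 3, 50.0));
    return 0;
}
```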
10685.790 - 10738.429: 77.8646% ( 103) 00:10:01.770 10738.429 - 10791.068: 78.4442% ( 69) 00:10:01.770 10791.068 - 10843.708: 78.8222% ( 45) 00:10:01.770 10843.708 - 10896.347: 79.5279% ( 84) 00:10:01.770 10896.347 - 10948.986: 79.8807% ( 42) 00:10:01.770 10948.986 - 11001.626: 80.3511% ( 56) 00:10:01.770 11001.626 - 11054.265: 80.9224% ( 68) 00:10:01.770 11054.265 - 11106.904: 81.1996% ( 33) 00:10:01.770 11106.904 - 11159.544: 81.4936% ( 35) 00:10:01.770 11159.544 - 11212.183: 81.8968% ( 48) 00:10:01.770 11212.183 - 11264.822: 82.2665% ( 44) 00:10:01.770 11264.822 - 11317.462: 82.4933% ( 27) 00:10:01.770 11317.462 - 11370.101: 82.6193% ( 15) 00:10:01.770 11370.101 - 11422.741: 82.8293% ( 25) 00:10:01.770 11422.741 - 11475.380: 82.9973% ( 20) 00:10:01.770 11475.380 - 11528.019: 83.1653% ( 20) 00:10:01.770 11528.019 - 11580.659: 83.3333% ( 20) 00:10:01.770 11580.659 - 11633.298: 83.6694% ( 40) 00:10:01.770 11633.298 - 11685.937: 83.8206% ( 18) 00:10:01.770 11685.937 - 11738.577: 83.9634% ( 17) 00:10:01.770 11738.577 - 11791.216: 84.0978% ( 16) 00:10:01.770 11791.216 - 11843.855: 84.3078% ( 25) 00:10:01.770 11843.855 - 11896.495: 84.4338% ( 15) 00:10:01.770 11896.495 - 11949.134: 84.5766% ( 17) 00:10:01.770 11949.134 - 12001.773: 84.7782% ( 24) 00:10:01.770 12001.773 - 12054.413: 85.1562% ( 45) 00:10:01.770 12054.413 - 12107.052: 85.7275% ( 68) 00:10:01.770 12107.052 - 12159.692: 86.2819% ( 66) 00:10:01.770 12159.692 - 12212.331: 86.6935% ( 49) 00:10:01.770 12212.331 - 12264.970: 87.0044% ( 37) 00:10:01.770 12264.970 - 12317.610: 87.4076% ( 48) 00:10:01.770 12317.610 - 12370.249: 87.6260% ( 26) 00:10:01.770 12370.249 - 12422.888: 87.8192% ( 23) 00:10:01.770 12422.888 - 12475.528: 87.9452% ( 15) 00:10:01.770 12475.528 - 12528.167: 88.0460% ( 12) 00:10:01.770 12528.167 - 12580.806: 88.1132% ( 8) 00:10:01.770 12580.806 - 12633.446: 88.1804% ( 8) 00:10:01.770 12633.446 - 12686.085: 88.2056% ( 3) 00:10:01.770 12686.085 - 12738.724: 88.2476% ( 5) 00:10:01.770 12738.724 - 12791.364: 88.3065% ( 7) 00:10:01.770 12791.364 - 12844.003: 88.3737% ( 8) 00:10:01.770 12844.003 - 12896.643: 88.4913% ( 14) 00:10:01.770 12896.643 - 12949.282: 88.5921% ( 12) 00:10:01.770 12949.282 - 13001.921: 88.7013% ( 13) 00:10:01.770 13001.921 - 13054.561: 88.8021% ( 12) 00:10:01.770 13054.561 - 13107.200: 88.8525% ( 6) 00:10:01.770 13107.200 - 13159.839: 89.0373% ( 22) 00:10:01.770 13159.839 - 13212.479: 89.1381% ( 12) 00:10:01.770 13212.479 - 13265.118: 89.1801% ( 5) 00:10:01.770 13265.118 - 13317.757: 89.2137% ( 4) 00:10:01.770 13317.757 - 13370.397: 89.2809% ( 8) 00:10:01.770 13370.397 - 13423.036: 89.3397% ( 7) 00:10:01.770 13423.036 - 13475.676: 89.3817% ( 5) 00:10:01.770 13475.676 - 13580.954: 89.5077% ( 15) 00:10:01.770 13580.954 - 13686.233: 89.6421% ( 16) 00:10:01.770 13686.233 - 13791.512: 89.7093% ( 8) 00:10:01.770 13791.512 - 13896.790: 89.7765% ( 8) 00:10:01.770 13896.790 - 14002.069: 90.0706% ( 35) 00:10:01.770 14002.069 - 14107.348: 90.4402% ( 44) 00:10:01.770 14107.348 - 14212.627: 90.8014% ( 43) 00:10:01.770 14212.627 - 14317.905: 90.9946% ( 23) 00:10:01.770 14317.905 - 14423.184: 91.1626% ( 20) 00:10:01.770 14423.184 - 14528.463: 91.4062% ( 29) 00:10:01.770 14528.463 - 14633.741: 91.6919% ( 34) 00:10:01.770 14633.741 - 14739.020: 92.0531% ( 43) 00:10:01.770 14739.020 - 14844.299: 92.6327% ( 69) 00:10:01.770 14844.299 - 14949.578: 93.0864% ( 54) 00:10:01.770 14949.578 - 15054.856: 93.5148% ( 51) 00:10:01.770 15054.856 - 15160.135: 93.7920% ( 33) 00:10:01.770 15160.135 - 15265.414: 94.3296% ( 64) 
00:10:01.770 15265.414 - 15370.692: 94.6825% ( 42) 00:10:01.770 15370.692 - 15475.971: 95.0017% ( 38) 00:10:01.770 15475.971 - 15581.250: 95.2789% ( 33) 00:10:01.770 15581.250 - 15686.529: 95.6317% ( 42) 00:10:01.770 15686.529 - 15791.807: 96.0601% ( 51) 00:10:01.770 15791.807 - 15897.086: 96.3962% ( 40) 00:10:01.770 15897.086 - 16002.365: 96.5894% ( 23) 00:10:01.770 16002.365 - 16107.643: 96.7070% ( 14) 00:10:01.770 16107.643 - 16212.922: 96.8246% ( 14) 00:10:01.770 16212.922 - 16318.201: 96.8414% ( 2) 00:10:01.770 16318.201 - 16423.480: 96.8750% ( 4) 00:10:01.770 16423.480 - 16528.758: 96.9338% ( 7) 00:10:01.770 16528.758 - 16634.037: 96.9758% ( 5) 00:10:01.770 16634.037 - 16739.316: 97.0598% ( 10) 00:10:01.770 16739.316 - 16844.594: 97.3874% ( 39) 00:10:01.770 16844.594 - 16949.873: 97.4798% ( 11) 00:10:01.771 16949.873 - 17055.152: 97.5050% ( 3) 00:10:01.771 17055.152 - 17160.431: 97.5302% ( 3) 00:10:01.771 17160.431 - 17265.709: 97.5638% ( 4) 00:10:01.771 17265.709 - 17370.988: 97.6142% ( 6) 00:10:01.771 17370.988 - 17476.267: 97.6562% ( 5) 00:10:01.771 17476.267 - 17581.545: 97.7067% ( 6) 00:10:01.771 17581.545 - 17686.824: 97.7487% ( 5) 00:10:01.771 17686.824 - 17792.103: 97.7907% ( 5) 00:10:01.771 17792.103 - 17897.382: 97.8411% ( 6) 00:10:01.771 17897.382 - 18002.660: 97.8495% ( 1) 00:10:01.771 18107.939 - 18213.218: 97.8831% ( 4) 00:10:01.771 18213.218 - 18318.496: 98.0847% ( 24) 00:10:01.771 18318.496 - 18423.775: 98.2863% ( 24) 00:10:01.771 18423.775 - 18529.054: 98.3871% ( 12) 00:10:01.771 18529.054 - 18634.333: 98.4627% ( 9) 00:10:01.771 18634.333 - 18739.611: 98.5551% ( 11) 00:10:01.771 18739.611 - 18844.890: 98.6727% ( 14) 00:10:01.771 18844.890 - 18950.169: 98.7903% ( 14) 00:10:01.771 18950.169 - 19055.447: 98.8911% ( 12) 00:10:01.771 19055.447 - 19160.726: 98.9247% ( 4) 00:10:01.771 30951.942 - 31162.500: 98.9415% ( 2) 00:10:01.771 31162.500 - 31373.057: 98.9835% ( 5) 00:10:01.771 31373.057 - 31583.614: 99.0255% ( 5) 00:10:01.771 31583.614 - 31794.172: 99.0759% ( 6) 00:10:01.771 31794.172 - 32004.729: 99.1179% ( 5) 00:10:01.771 32004.729 - 32215.287: 99.1683% ( 6) 00:10:01.771 32215.287 - 32425.844: 99.2103% ( 5) 00:10:01.771 32425.844 - 32636.402: 99.2608% ( 6) 00:10:01.771 32636.402 - 32846.959: 99.3028% ( 5) 00:10:01.771 32846.959 - 33057.516: 99.3448% ( 5) 00:10:01.771 33057.516 - 33268.074: 99.3784% ( 4) 00:10:01.771 33268.074 - 33478.631: 99.4204% ( 5) 00:10:01.771 33478.631 - 33689.189: 99.4624% ( 5) 00:10:01.771 40848.141 - 41058.699: 99.5044% ( 5) 00:10:01.771 41058.699 - 41269.256: 99.5380% ( 4) 00:10:01.771 41269.256 - 41479.814: 99.5884% ( 6) 00:10:01.771 41479.814 - 41690.371: 99.6220% ( 4) 00:10:01.771 41690.371 - 41900.929: 99.6724% ( 6) 00:10:01.771 41900.929 - 42111.486: 99.7060% ( 4) 00:10:01.771 42111.486 - 42322.043: 99.7564% ( 6) 00:10:01.771 42322.043 - 42532.601: 99.7984% ( 5) 00:10:01.771 42532.601 - 42743.158: 99.8488% ( 6) 00:10:01.771 42743.158 - 42953.716: 99.8908% ( 5) 00:10:01.771 42953.716 - 43164.273: 99.9412% ( 6) 00:10:01.771 43164.273 - 43374.831: 99.9832% ( 5) 00:10:01.771 43374.831 - 43585.388: 100.0000% ( 2) 00:10:01.771 00:10:01.771 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:10:01.771 ============================================================================== 00:10:01.771 Range in us Cumulative IO count 00:10:01.771 8317.018 - 8369.658: 0.0084% ( 1) 00:10:01.771 8369.658 - 8422.297: 0.0252% ( 2) 00:10:01.771 8422.297 - 8474.937: 0.0588% ( 4) 00:10:01.771 8474.937 - 8527.576: 0.1260% ( 8) 00:10:01.771 8527.576 
- 8580.215: 0.2520% ( 15) 00:10:01.771 8580.215 - 8632.855: 0.5376% ( 34) 00:10:01.771 8632.855 - 8685.494: 0.6720% ( 16) 00:10:01.771 8685.494 - 8738.133: 0.9493% ( 33) 00:10:01.771 8738.133 - 8790.773: 1.2853% ( 40) 00:10:01.771 8790.773 - 8843.412: 1.6549% ( 44) 00:10:01.771 8843.412 - 8896.051: 2.3774% ( 86) 00:10:01.771 8896.051 - 8948.691: 2.8394% ( 55) 00:10:01.771 8948.691 - 9001.330: 3.4610% ( 74) 00:10:01.771 9001.330 - 9053.969: 4.2927% ( 99) 00:10:01.771 9053.969 - 9106.609: 5.6872% ( 166) 00:10:01.771 9106.609 - 9159.248: 7.3841% ( 202) 00:10:01.771 9159.248 - 9211.888: 9.6186% ( 266) 00:10:01.771 9211.888 - 9264.527: 12.0884% ( 294) 00:10:01.771 9264.527 - 9317.166: 14.8690% ( 331) 00:10:01.771 9317.166 - 9369.806: 17.8259% ( 352) 00:10:01.771 9369.806 - 9422.445: 20.7073% ( 343) 00:10:01.771 9422.445 - 9475.084: 23.8995% ( 380) 00:10:01.771 9475.084 - 9527.724: 27.5622% ( 436) 00:10:01.771 9527.724 - 9580.363: 31.6196% ( 483) 00:10:01.771 9580.363 - 9633.002: 35.3999% ( 450) 00:10:01.771 9633.002 - 9685.642: 39.6589% ( 507) 00:10:01.771 9685.642 - 9738.281: 42.9856% ( 396) 00:10:01.771 9738.281 - 9790.920: 46.9422% ( 471) 00:10:01.771 9790.920 - 9843.560: 50.6132% ( 437) 00:10:01.771 9843.560 - 9896.199: 53.4190% ( 334) 00:10:01.771 9896.199 - 9948.839: 55.5612% ( 255) 00:10:01.771 9948.839 - 10001.478: 57.7957% ( 266) 00:10:01.771 10001.478 - 10054.117: 59.7026% ( 227) 00:10:01.771 10054.117 - 10106.757: 61.4079% ( 203) 00:10:01.771 10106.757 - 10159.396: 63.0460% ( 195) 00:10:01.771 10159.396 - 10212.035: 64.8522% ( 215) 00:10:01.771 10212.035 - 10264.675: 66.8935% ( 243) 00:10:01.771 10264.675 - 10317.314: 68.3468% ( 173) 00:10:01.771 10317.314 - 10369.953: 69.6405% ( 154) 00:10:01.771 10369.953 - 10422.593: 70.8249% ( 141) 00:10:01.771 10422.593 - 10475.232: 72.5722% ( 208) 00:10:01.771 10475.232 - 10527.871: 73.9667% ( 166) 00:10:01.771 10527.871 - 10580.511: 75.0168% ( 125) 00:10:01.771 10580.511 - 10633.150: 75.8317% ( 97) 00:10:01.771 10633.150 - 10685.790: 76.6885% ( 102) 00:10:01.771 10685.790 - 10738.429: 77.8646% ( 140) 00:10:01.771 10738.429 - 10791.068: 78.6290% ( 91) 00:10:01.771 10791.068 - 10843.708: 79.3599% ( 87) 00:10:01.771 10843.708 - 10896.347: 80.0823% ( 86) 00:10:01.771 10896.347 - 10948.986: 80.6956% ( 73) 00:10:01.771 10948.986 - 11001.626: 81.1912% ( 59) 00:10:01.771 11001.626 - 11054.265: 81.5944% ( 48) 00:10:01.771 11054.265 - 11106.904: 81.9388% ( 41) 00:10:01.771 11106.904 - 11159.544: 82.3421% ( 48) 00:10:01.771 11159.544 - 11212.183: 82.6529% ( 37) 00:10:01.771 11212.183 - 11264.822: 82.9217% ( 32) 00:10:01.771 11264.822 - 11317.462: 83.1149% ( 23) 00:10:01.771 11317.462 - 11370.101: 83.2913% ( 21) 00:10:01.771 11370.101 - 11422.741: 83.4509% ( 19) 00:10:01.771 11422.741 - 11475.380: 83.5265% ( 9) 00:10:01.771 11475.380 - 11528.019: 83.6274% ( 12) 00:10:01.771 11528.019 - 11580.659: 83.8710% ( 29) 00:10:01.771 11580.659 - 11633.298: 83.9634% ( 11) 00:10:01.771 11633.298 - 11685.937: 84.1566% ( 23) 00:10:01.771 11685.937 - 11738.577: 84.3666% ( 25) 00:10:01.771 11738.577 - 11791.216: 84.5262% ( 19) 00:10:01.771 11791.216 - 11843.855: 84.7278% ( 24) 00:10:01.771 11843.855 - 11896.495: 84.8622% ( 16) 00:10:01.771 11896.495 - 11949.134: 85.0554% ( 23) 00:10:01.771 11949.134 - 12001.773: 85.4419% ( 46) 00:10:01.771 12001.773 - 12054.413: 85.7611% ( 38) 00:10:01.771 12054.413 - 12107.052: 86.0803% ( 38) 00:10:01.771 12107.052 - 12159.692: 86.5255% ( 53) 00:10:01.771 12159.692 - 12212.331: 86.6683% ( 17) 00:10:01.771 12212.331 - 12264.970: 86.8112% 
( 17) 00:10:01.771 12264.970 - 12317.610: 87.0044% ( 23) 00:10:01.771 12317.610 - 12370.249: 87.1808% ( 21) 00:10:01.771 12370.249 - 12422.888: 87.3236% ( 17) 00:10:01.771 12422.888 - 12475.528: 87.4496% ( 15) 00:10:01.771 12475.528 - 12528.167: 87.6176% ( 20) 00:10:01.771 12528.167 - 12580.806: 87.7520% ( 16) 00:10:01.771 12580.806 - 12633.446: 87.9032% ( 18) 00:10:01.771 12633.446 - 12686.085: 88.0544% ( 18) 00:10:01.771 12686.085 - 12738.724: 88.2140% ( 19) 00:10:01.771 12738.724 - 12791.364: 88.3821% ( 20) 00:10:01.771 12791.364 - 12844.003: 88.4997% ( 14) 00:10:01.771 12844.003 - 12896.643: 88.6761% ( 21) 00:10:01.771 12896.643 - 12949.282: 88.8441% ( 20) 00:10:01.771 12949.282 - 13001.921: 88.9197% ( 9) 00:10:01.771 13001.921 - 13054.561: 88.9953% ( 9) 00:10:01.771 13054.561 - 13107.200: 89.1297% ( 16) 00:10:01.771 13107.200 - 13159.839: 89.2725% ( 17) 00:10:01.771 13159.839 - 13212.479: 89.3733% ( 12) 00:10:01.771 13212.479 - 13265.118: 89.4741% ( 12) 00:10:01.772 13265.118 - 13317.757: 89.6001% ( 15) 00:10:01.772 13317.757 - 13370.397: 89.6505% ( 6) 00:10:01.772 13370.397 - 13423.036: 89.6673% ( 2) 00:10:01.772 13423.036 - 13475.676: 89.7009% ( 4) 00:10:01.772 13475.676 - 13580.954: 89.8269% ( 15) 00:10:01.772 13580.954 - 13686.233: 90.0034% ( 21) 00:10:01.772 13686.233 - 13791.512: 90.2722% ( 32) 00:10:01.772 13791.512 - 13896.790: 90.5326% ( 31) 00:10:01.772 13896.790 - 14002.069: 90.7930% ( 31) 00:10:01.772 14002.069 - 14107.348: 91.1794% ( 46) 00:10:01.772 14107.348 - 14212.627: 91.5743% ( 47) 00:10:01.772 14212.627 - 14317.905: 91.8599% ( 34) 00:10:01.772 14317.905 - 14423.184: 92.1371% ( 33) 00:10:01.772 14423.184 - 14528.463: 92.4479% ( 37) 00:10:01.772 14528.463 - 14633.741: 92.6663% ( 26) 00:10:01.772 14633.741 - 14739.020: 92.8343% ( 20) 00:10:01.772 14739.020 - 14844.299: 93.1200% ( 34) 00:10:01.772 14844.299 - 14949.578: 93.4980% ( 45) 00:10:01.772 14949.578 - 15054.856: 93.8340% ( 40) 00:10:01.772 15054.856 - 15160.135: 93.9600% ( 15) 00:10:01.772 15160.135 - 15265.414: 94.1112% ( 18) 00:10:01.772 15265.414 - 15370.692: 94.3884% ( 33) 00:10:01.772 15370.692 - 15475.971: 94.5060% ( 14) 00:10:01.772 15475.971 - 15581.250: 94.5901% ( 10) 00:10:01.772 15581.250 - 15686.529: 94.6909% ( 12) 00:10:01.772 15686.529 - 15791.807: 94.9093% ( 26) 00:10:01.772 15791.807 - 15897.086: 95.2789% ( 44) 00:10:01.772 15897.086 - 16002.365: 95.9509% ( 80) 00:10:01.772 16002.365 - 16107.643: 96.3290% ( 45) 00:10:01.772 16107.643 - 16212.922: 96.5306% ( 24) 00:10:01.772 16212.922 - 16318.201: 96.6314% ( 12) 00:10:01.772 16318.201 - 16423.480: 96.6902% ( 7) 00:10:01.772 16423.480 - 16528.758: 96.7910% ( 12) 00:10:01.772 16528.758 - 16634.037: 96.8750% ( 10) 00:10:01.772 16634.037 - 16739.316: 96.9842% ( 13) 00:10:01.772 16739.316 - 16844.594: 97.2446% ( 31) 00:10:01.772 16844.594 - 16949.873: 97.4042% ( 19) 00:10:01.772 16949.873 - 17055.152: 97.4546% ( 6) 00:10:01.772 17055.152 - 17160.431: 97.4798% ( 3) 00:10:01.772 17160.431 - 17265.709: 97.5134% ( 4) 00:10:01.772 17265.709 - 17370.988: 97.5554% ( 5) 00:10:01.772 17370.988 - 17476.267: 97.5890% ( 4) 00:10:01.772 17476.267 - 17581.545: 97.6647% ( 9) 00:10:01.772 17581.545 - 17686.824: 97.8075% ( 17) 00:10:01.772 17686.824 - 17792.103: 97.9839% ( 21) 00:10:01.772 17792.103 - 17897.382: 98.1435% ( 19) 00:10:01.772 17897.382 - 18002.660: 98.2695% ( 15) 00:10:01.772 18002.660 - 18107.939: 98.3955% ( 15) 00:10:01.772 18107.939 - 18213.218: 98.5131% ( 14) 00:10:01.772 18213.218 - 18318.496: 98.6055% ( 11) 00:10:01.772 18318.496 - 18423.775: 
98.6979% ( 11) 00:10:01.772 18423.775 - 18529.054: 98.7903% ( 11) 00:10:01.772 18529.054 - 18634.333: 98.8491% ( 7) 00:10:01.772 18634.333 - 18739.611: 98.8995% ( 6) 00:10:01.772 18739.611 - 18844.890: 98.9247% ( 3) 00:10:01.772 30109.712 - 30320.270: 98.9583% ( 4) 00:10:01.772 30320.270 - 30530.827: 99.0003% ( 5) 00:10:01.772 30530.827 - 30741.385: 99.0423% ( 5) 00:10:01.772 30741.385 - 30951.942: 99.0843% ( 5) 00:10:01.772 30951.942 - 31162.500: 99.1347% ( 6) 00:10:01.772 31162.500 - 31373.057: 99.1767% ( 5) 00:10:01.772 31373.057 - 31583.614: 99.2188% ( 5) 00:10:01.772 31583.614 - 31794.172: 99.2692% ( 6) 00:10:01.772 31794.172 - 32004.729: 99.3196% ( 6) 00:10:01.772 32004.729 - 32215.287: 99.3616% ( 5) 00:10:01.772 32215.287 - 32425.844: 99.4120% ( 6) 00:10:01.772 32425.844 - 32636.402: 99.4540% ( 5) 00:10:01.772 32636.402 - 32846.959: 99.4624% ( 1) 00:10:01.772 40005.912 - 40216.469: 99.4960% ( 4) 00:10:01.772 40216.469 - 40427.027: 99.5380% ( 5) 00:10:01.772 40427.027 - 40637.584: 99.5716% ( 4) 00:10:01.772 40637.584 - 40848.141: 99.6136% ( 5) 00:10:01.772 40848.141 - 41058.699: 99.6640% ( 6) 00:10:01.772 41058.699 - 41269.256: 99.7060% ( 5) 00:10:01.772 41269.256 - 41479.814: 99.7480% ( 5) 00:10:01.772 41479.814 - 41690.371: 99.7900% ( 5) 00:10:01.772 41690.371 - 41900.929: 99.8404% ( 6) 00:10:01.772 41900.929 - 42111.486: 99.8824% ( 5) 00:10:01.772 42111.486 - 42322.043: 99.9328% ( 6) 00:10:01.772 42322.043 - 42532.601: 99.9748% ( 5) 00:10:01.772 42532.601 - 42743.158: 100.0000% ( 3) 00:10:01.772 00:10:01.772 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:10:01.772 ============================================================================== 00:10:01.772 Range in us Cumulative IO count 00:10:01.772 8369.658 - 8422.297: 0.0252% ( 3) 00:10:01.772 8422.297 - 8474.937: 0.0672% ( 5) 00:10:01.772 8474.937 - 8527.576: 0.1260% ( 7) 00:10:01.772 8527.576 - 8580.215: 0.2016% ( 9) 00:10:01.772 8580.215 - 8632.855: 0.3276% ( 15) 00:10:01.772 8632.855 - 8685.494: 0.4536% ( 15) 00:10:01.772 8685.494 - 8738.133: 0.6300% ( 21) 00:10:01.772 8738.133 - 8790.773: 0.8401% ( 25) 00:10:01.772 8790.773 - 8843.412: 1.1929% ( 42) 00:10:01.772 8843.412 - 8896.051: 1.7977% ( 72) 00:10:01.772 8896.051 - 8948.691: 2.4866% ( 82) 00:10:01.772 8948.691 - 9001.330: 3.3854% ( 107) 00:10:01.772 9001.330 - 9053.969: 4.5615% ( 140) 00:10:01.772 9053.969 - 9106.609: 5.9224% ( 162) 00:10:01.772 9106.609 - 9159.248: 7.6529% ( 206) 00:10:01.772 9159.248 - 9211.888: 9.5430% ( 225) 00:10:01.772 9211.888 - 9264.527: 12.0380% ( 297) 00:10:01.772 9264.527 - 9317.166: 14.5665% ( 301) 00:10:01.772 9317.166 - 9369.806: 17.4899% ( 348) 00:10:01.772 9369.806 - 9422.445: 21.0517% ( 424) 00:10:01.772 9422.445 - 9475.084: 24.3364% ( 391) 00:10:01.772 9475.084 - 9527.724: 27.6378% ( 393) 00:10:01.772 9527.724 - 9580.363: 30.9392% ( 393) 00:10:01.772 9580.363 - 9633.002: 34.1146% ( 378) 00:10:01.772 9633.002 - 9685.642: 38.1132% ( 476) 00:10:01.772 9685.642 - 9738.281: 41.3054% ( 380) 00:10:01.772 9738.281 - 9790.920: 45.3041% ( 476) 00:10:01.772 9790.920 - 9843.560: 48.8743% ( 425) 00:10:01.772 9843.560 - 9896.199: 52.0917% ( 383) 00:10:01.772 9896.199 - 9948.839: 55.1579% ( 365) 00:10:01.772 9948.839 - 10001.478: 57.9805% ( 336) 00:10:01.772 10001.478 - 10054.117: 60.2823% ( 274) 00:10:01.772 10054.117 - 10106.757: 62.5504% ( 270) 00:10:01.772 10106.757 - 10159.396: 64.3313% ( 212) 00:10:01.772 10159.396 - 10212.035: 65.8854% ( 185) 00:10:01.772 10212.035 - 10264.675: 67.4143% ( 182) 00:10:01.772 10264.675 - 
10317.314: 68.6828% ( 151) 00:10:01.772 10317.314 - 10369.953: 70.0185% ( 159) 00:10:01.772 10369.953 - 10422.593: 71.4298% ( 168) 00:10:01.772 10422.593 - 10475.232: 72.7571% ( 158) 00:10:01.772 10475.232 - 10527.871: 74.2272% ( 175) 00:10:01.772 10527.871 - 10580.511: 75.5460% ( 157) 00:10:01.772 10580.511 - 10633.150: 76.4701% ( 110) 00:10:01.772 10633.150 - 10685.790: 77.4278% ( 114) 00:10:01.772 10685.790 - 10738.429: 78.2846% ( 102) 00:10:01.772 10738.429 - 10791.068: 78.9483% ( 79) 00:10:01.772 10791.068 - 10843.708: 79.4523% ( 60) 00:10:01.772 10843.708 - 10896.347: 80.1159% ( 79) 00:10:01.772 10896.347 - 10948.986: 80.6116% ( 59) 00:10:01.772 10948.986 - 11001.626: 80.9224% ( 37) 00:10:01.772 11001.626 - 11054.265: 81.2164% ( 35) 00:10:01.772 11054.265 - 11106.904: 81.4264% ( 25) 00:10:01.772 11106.904 - 11159.544: 81.7540% ( 39) 00:10:01.772 11159.544 - 11212.183: 81.9724% ( 26) 00:10:01.772 11212.183 - 11264.822: 82.1237% ( 18) 00:10:01.772 11264.822 - 11317.462: 82.2581% ( 16) 00:10:01.772 11317.462 - 11370.101: 82.4849% ( 27) 00:10:01.772 11370.101 - 11422.741: 82.6277% ( 17) 00:10:01.772 11422.741 - 11475.380: 82.7789% ( 18) 00:10:01.772 11475.380 - 11528.019: 83.0225% ( 29) 00:10:01.772 11528.019 - 11580.659: 83.2409% ( 26) 00:10:01.773 11580.659 - 11633.298: 83.3837% ( 17) 00:10:01.773 11633.298 - 11685.937: 83.5097% ( 15) 00:10:01.773 11685.937 - 11738.577: 83.6358% ( 15) 00:10:01.773 11738.577 - 11791.216: 84.0138% ( 45) 00:10:01.773 11791.216 - 11843.855: 84.2574% ( 29) 00:10:01.773 11843.855 - 11896.495: 84.4338% ( 21) 00:10:01.773 11896.495 - 11949.134: 84.6018% ( 20) 00:10:01.773 11949.134 - 12001.773: 84.7950% ( 23) 00:10:01.773 12001.773 - 12054.413: 85.0554% ( 31) 00:10:01.773 12054.413 - 12107.052: 85.4839% ( 51) 00:10:01.773 12107.052 - 12159.692: 85.7443% ( 31) 00:10:01.773 12159.692 - 12212.331: 86.0551% ( 37) 00:10:01.773 12212.331 - 12264.970: 86.4415% ( 46) 00:10:01.773 12264.970 - 12317.610: 86.6767% ( 28) 00:10:01.773 12317.610 - 12370.249: 86.8868% ( 25) 00:10:01.773 12370.249 - 12422.888: 87.2228% ( 40) 00:10:01.773 12422.888 - 12475.528: 87.3656% ( 17) 00:10:01.773 12475.528 - 12528.167: 87.4916% ( 15) 00:10:01.773 12528.167 - 12580.806: 87.6092% ( 14) 00:10:01.773 12580.806 - 12633.446: 87.7184% ( 13) 00:10:01.773 12633.446 - 12686.085: 87.9872% ( 32) 00:10:01.773 12686.085 - 12738.724: 88.2140% ( 27) 00:10:01.773 12738.724 - 12791.364: 88.3065% ( 11) 00:10:01.773 12791.364 - 12844.003: 88.4745% ( 20) 00:10:01.773 12844.003 - 12896.643: 88.6509% ( 21) 00:10:01.773 12896.643 - 12949.282: 88.8525% ( 24) 00:10:01.773 12949.282 - 13001.921: 89.1045% ( 30) 00:10:01.773 13001.921 - 13054.561: 89.2473% ( 17) 00:10:01.773 13054.561 - 13107.200: 89.3229% ( 9) 00:10:01.773 13107.200 - 13159.839: 89.4489% ( 15) 00:10:01.773 13159.839 - 13212.479: 89.4993% ( 6) 00:10:01.773 13212.479 - 13265.118: 89.5833% ( 10) 00:10:01.773 13265.118 - 13317.757: 89.6841% ( 12) 00:10:01.773 13317.757 - 13370.397: 89.7849% ( 12) 00:10:01.773 13370.397 - 13423.036: 89.8690% ( 10) 00:10:01.773 13423.036 - 13475.676: 90.0034% ( 16) 00:10:01.773 13475.676 - 13580.954: 90.2554% ( 30) 00:10:01.773 13580.954 - 13686.233: 90.5494% ( 35) 00:10:01.773 13686.233 - 13791.512: 90.9526% ( 48) 00:10:01.773 13791.512 - 13896.790: 91.3306% ( 45) 00:10:01.773 13896.790 - 14002.069: 91.6667% ( 40) 00:10:01.773 14002.069 - 14107.348: 91.9859% ( 38) 00:10:01.773 14107.348 - 14212.627: 92.2799% ( 35) 00:10:01.773 14212.627 - 14317.905: 92.5739% ( 35) 00:10:01.773 14317.905 - 14423.184: 92.8007% ( 27) 
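The bucket boundaries themselves follow a log-linear layout: within one power-of-two latency range the buckets are equally wide, and the width doubles from range to range (52.639 us steps up to 13475.676 us, ~105.28 us steps up to 26951.351 us, ~210.56 us beyond). That is consistent with a histogram keeping 128 linear sub-buckets per doubling; the sketch below reproduces the observed boundaries under that assumption, which is inferred from the output rather than read from SPDK source, modulo rounding in the printed values:

```c
/* Reproduce the histogram's bucket geometry: equal-width buckets inside
 * each power-of-two latency range, with 128 sub-buckets per doubling
 * (an assumption inferred from the 52.639 -> 105.28 -> 210.56 us steps). */
#include <stdio.h>

int main(void)
{
    const int sub_buckets = 128;      /* assumed buckets per 2x range */
    double range_start = 6737.838;    /* us; implied by the 52.639 us step */

    for (int r = 0; r < 3; r++) {
        double width = range_start / sub_buckets;
        printf("range [%9.3f, %9.3f) us, bucket width %7.3f us\n",
               range_start, range_start * 2.0, width);
        range_start *= 2.0;
    }
    /* Prints widths 52.639, 105.279, 210.557 -- matching the boundary
     * spacing visible in the histograms above. */
    return 0;
}
```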
00:10:01.773 14423.184 - 14528.463: 92.9688% ( 20) 00:10:01.773 14528.463 - 14633.741: 93.2544% ( 34) 00:10:01.773 14633.741 - 14739.020: 93.4896% ( 28) 00:10:01.773 14739.020 - 14844.299: 93.5568% ( 8) 00:10:01.773 14844.299 - 14949.578: 93.6240% ( 8) 00:10:01.773 14949.578 - 15054.856: 93.6912% ( 8) 00:10:01.773 15054.856 - 15160.135: 93.7836% ( 11) 00:10:01.773 15160.135 - 15265.414: 93.9348% ( 18) 00:10:01.773 15265.414 - 15370.692: 94.0692% ( 16) 00:10:01.773 15370.692 - 15475.971: 94.3884% ( 38) 00:10:01.773 15475.971 - 15581.250: 94.7833% ( 47) 00:10:01.773 15581.250 - 15686.529: 94.9681% ( 22) 00:10:01.773 15686.529 - 15791.807: 95.2873% ( 38) 00:10:01.773 15791.807 - 15897.086: 95.4469% ( 19) 00:10:01.773 15897.086 - 16002.365: 95.5393% ( 11) 00:10:01.773 16002.365 - 16107.643: 95.6317% ( 11) 00:10:01.773 16107.643 - 16212.922: 95.7997% ( 20) 00:10:01.773 16212.922 - 16318.201: 95.8921% ( 11) 00:10:01.773 16318.201 - 16423.480: 95.9845% ( 11) 00:10:01.773 16423.480 - 16528.758: 96.2534% ( 32) 00:10:01.773 16528.758 - 16634.037: 96.5810% ( 39) 00:10:01.773 16634.037 - 16739.316: 96.9002% ( 38) 00:10:01.773 16739.316 - 16844.594: 97.1438% ( 29) 00:10:01.773 16844.594 - 16949.873: 97.3370% ( 23) 00:10:01.773 16949.873 - 17055.152: 97.4210% ( 10) 00:10:01.773 17055.152 - 17160.431: 97.5302% ( 13) 00:10:01.773 17160.431 - 17265.709: 97.7655% ( 28) 00:10:01.773 17265.709 - 17370.988: 98.0595% ( 35) 00:10:01.773 17370.988 - 17476.267: 98.3367% ( 33) 00:10:01.773 17476.267 - 17581.545: 98.5299% ( 23) 00:10:01.773 17581.545 - 17686.824: 98.6559% ( 15) 00:10:01.773 17686.824 - 17792.103: 98.7315% ( 9) 00:10:01.773 17792.103 - 17897.382: 98.8155% ( 10) 00:10:01.773 17897.382 - 18002.660: 98.8491% ( 4) 00:10:01.773 18002.660 - 18107.939: 98.8911% ( 5) 00:10:01.773 18107.939 - 18213.218: 98.9247% ( 4) 00:10:01.773 28846.368 - 29056.925: 98.9583% ( 4) 00:10:01.773 29056.925 - 29267.483: 99.0087% ( 6) 00:10:01.773 29267.483 - 29478.040: 99.0591% ( 6) 00:10:01.773 29478.040 - 29688.598: 99.1011% ( 5) 00:10:01.773 29688.598 - 29899.155: 99.1515% ( 6) 00:10:01.773 29899.155 - 30109.712: 99.1935% ( 5) 00:10:01.773 30109.712 - 30320.270: 99.2440% ( 6) 00:10:01.773 30320.270 - 30530.827: 99.2944% ( 6) 00:10:01.773 30530.827 - 30741.385: 99.3280% ( 4) 00:10:01.773 30741.385 - 30951.942: 99.3784% ( 6) 00:10:01.773 30951.942 - 31162.500: 99.4120% ( 4) 00:10:01.773 31162.500 - 31373.057: 99.4624% ( 6) 00:10:01.773 38742.567 - 38953.124: 99.4960% ( 4) 00:10:01.773 38953.124 - 39163.682: 99.5464% ( 6) 00:10:01.773 39163.682 - 39374.239: 99.5884% ( 5) 00:10:01.773 39374.239 - 39584.797: 99.6304% ( 5) 00:10:01.773 39584.797 - 39795.354: 99.6640% ( 4) 00:10:01.773 39795.354 - 40005.912: 99.7060% ( 5) 00:10:01.773 40005.912 - 40216.469: 99.7564% ( 6) 00:10:01.773 40216.469 - 40427.027: 99.7984% ( 5) 00:10:01.773 40427.027 - 40637.584: 99.8488% ( 6) 00:10:01.773 40637.584 - 40848.141: 99.8908% ( 5) 00:10:01.773 40848.141 - 41058.699: 99.9412% ( 6) 00:10:01.773 41058.699 - 41269.256: 99.9832% ( 5) 00:10:01.773 41269.256 - 41479.814: 100.0000% ( 2) 00:10:01.773 00:10:01.773 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:10:01.773 ============================================================================== 00:10:01.773 Range in us Cumulative IO count 00:10:01.773 8159.100 - 8211.740: 0.0084% ( 1) 00:10:01.773 8369.658 - 8422.297: 0.0420% ( 4) 00:10:01.773 8422.297 - 8474.937: 0.1092% ( 8) 00:10:01.773 8474.937 - 8527.576: 0.1932% ( 10) 00:10:01.773 8527.576 - 8580.215: 0.3108% ( 14) 00:10:01.773 
8580.215 - 8632.855: 0.4452% ( 16) 00:10:01.773 8632.855 - 8685.494: 0.6216% ( 21) 00:10:01.773 8685.494 - 8738.133: 0.8485% ( 27) 00:10:01.773 8738.133 - 8790.773: 1.1173% ( 32) 00:10:01.773 8790.773 - 8843.412: 1.7725% ( 78) 00:10:01.773 8843.412 - 8896.051: 2.1001% ( 39) 00:10:01.773 8896.051 - 8948.691: 2.7218% ( 74) 00:10:01.773 8948.691 - 9001.330: 3.4190% ( 83) 00:10:01.773 9001.330 - 9053.969: 4.2759% ( 102) 00:10:01.773 9053.969 - 9106.609: 5.6872% ( 168) 00:10:01.773 9106.609 - 9159.248: 7.7033% ( 240) 00:10:01.773 9159.248 - 9211.888: 9.6942% ( 237) 00:10:01.773 9211.888 - 9264.527: 12.2648% ( 306) 00:10:01.773 9264.527 - 9317.166: 14.9614% ( 321) 00:10:01.773 9317.166 - 9369.806: 17.7251% ( 329) 00:10:01.773 9369.806 - 9422.445: 20.6989% ( 354) 00:10:01.773 9422.445 - 9475.084: 23.8659% ( 377) 00:10:01.773 9475.084 - 9527.724: 27.3774% ( 418) 00:10:01.773 9527.724 - 9580.363: 30.7712% ( 404) 00:10:01.773 9580.363 - 9633.002: 33.9634% ( 380) 00:10:01.773 9633.002 - 9685.642: 37.1556% ( 380) 00:10:01.773 9685.642 - 9738.281: 40.6082% ( 411) 00:10:01.773 9738.281 - 9790.920: 44.7245% ( 490) 00:10:01.773 9790.920 - 9843.560: 48.6223% ( 464) 00:10:01.773 9843.560 - 9896.199: 52.6966% ( 485) 00:10:01.773 9896.199 - 9948.839: 56.0484% ( 399) 00:10:01.773 9948.839 - 10001.478: 58.7534% ( 322) 00:10:01.773 10001.478 - 10054.117: 60.9879% ( 266) 00:10:01.773 10054.117 - 10106.757: 62.9116% ( 229) 00:10:01.773 10106.757 - 10159.396: 64.6757% ( 210) 00:10:01.773 10159.396 - 10212.035: 66.6331% ( 233) 00:10:01.773 10212.035 - 10264.675: 68.0192% ( 165) 00:10:01.773 10264.675 - 10317.314: 69.4556% ( 171) 00:10:01.773 10317.314 - 10369.953: 70.6233% ( 139) 00:10:01.773 10369.953 - 10422.593: 71.8834% ( 150) 00:10:01.773 10422.593 - 10475.232: 73.2359% ( 161) 00:10:01.773 10475.232 - 10527.871: 74.0675% ( 99) 00:10:01.773 10527.871 - 10580.511: 74.8908% ( 98) 00:10:01.773 10580.511 - 10633.150: 75.7560% ( 103) 00:10:01.774 10633.150 - 10685.790: 76.6213% ( 103) 00:10:01.774 10685.790 - 10738.429: 77.2093% ( 70) 00:10:01.774 10738.429 - 10791.068: 78.1082% ( 107) 00:10:01.774 10791.068 - 10843.708: 78.7130% ( 72) 00:10:01.774 10843.708 - 10896.347: 79.4355% ( 86) 00:10:01.774 10896.347 - 10948.986: 80.0319% ( 71) 00:10:01.774 10948.986 - 11001.626: 80.5948% ( 67) 00:10:01.774 11001.626 - 11054.265: 81.1156% ( 62) 00:10:01.774 11054.265 - 11106.904: 81.5356% ( 50) 00:10:01.774 11106.904 - 11159.544: 81.8044% ( 32) 00:10:01.774 11159.544 - 11212.183: 81.9640% ( 19) 00:10:01.774 11212.183 - 11264.822: 82.0817% ( 14) 00:10:01.774 11264.822 - 11317.462: 82.2077% ( 15) 00:10:01.774 11317.462 - 11370.101: 82.3253% ( 14) 00:10:01.774 11370.101 - 11422.741: 82.4177% ( 11) 00:10:01.774 11422.741 - 11475.380: 82.4933% ( 9) 00:10:01.774 11475.380 - 11528.019: 82.6109% ( 14) 00:10:01.774 11528.019 - 11580.659: 82.7705% ( 19) 00:10:01.774 11580.659 - 11633.298: 83.0057% ( 28) 00:10:01.774 11633.298 - 11685.937: 83.3753% ( 44) 00:10:01.774 11685.937 - 11738.577: 83.7282% ( 42) 00:10:01.774 11738.577 - 11791.216: 83.9802% ( 30) 00:10:01.774 11791.216 - 11843.855: 84.2826% ( 36) 00:10:01.774 11843.855 - 11896.495: 84.7866% ( 60) 00:10:01.774 11896.495 - 11949.134: 84.9294% ( 17) 00:10:01.774 11949.134 - 12001.773: 85.0386% ( 13) 00:10:01.774 12001.773 - 12054.413: 85.1394% ( 12) 00:10:01.774 12054.413 - 12107.052: 85.2235% ( 10) 00:10:01.774 12107.052 - 12159.692: 85.3243% ( 12) 00:10:01.774 12159.692 - 12212.331: 85.4587% ( 16) 00:10:01.774 12212.331 - 12264.970: 85.6099% ( 18) 00:10:01.774 12264.970 - 
12317.610: 85.9039% ( 35) 00:10:01.774 12317.610 - 12370.249: 86.1055% ( 24) 00:10:01.774 12370.249 - 12422.888: 86.2315% ( 15) 00:10:01.774 12422.888 - 12475.528: 86.5255% ( 35) 00:10:01.774 12475.528 - 12528.167: 86.8952% ( 44) 00:10:01.774 12528.167 - 12580.806: 87.1892% ( 35) 00:10:01.774 12580.806 - 12633.446: 87.3908% ( 24) 00:10:01.774 12633.446 - 12686.085: 87.5504% ( 19) 00:10:01.774 12686.085 - 12738.724: 87.6512% ( 12) 00:10:01.774 12738.724 - 12791.364: 87.8024% ( 18) 00:10:01.774 12791.364 - 12844.003: 87.9788% ( 21) 00:10:01.774 12844.003 - 12896.643: 88.1888% ( 25) 00:10:01.774 12896.643 - 12949.282: 88.3401% ( 18) 00:10:01.774 12949.282 - 13001.921: 88.6089% ( 32) 00:10:01.774 13001.921 - 13054.561: 88.8525% ( 29) 00:10:01.774 13054.561 - 13107.200: 89.2053% ( 42) 00:10:01.774 13107.200 - 13159.839: 89.5581% ( 42) 00:10:01.774 13159.839 - 13212.479: 89.8101% ( 30) 00:10:01.774 13212.479 - 13265.118: 90.0622% ( 30) 00:10:01.774 13265.118 - 13317.757: 90.3226% ( 31) 00:10:01.774 13317.757 - 13370.397: 90.5242% ( 24) 00:10:01.774 13370.397 - 13423.036: 90.7006% ( 21) 00:10:01.774 13423.036 - 13475.676: 90.9274% ( 27) 00:10:01.774 13475.676 - 13580.954: 91.2382% ( 37) 00:10:01.774 13580.954 - 13686.233: 91.3978% ( 19) 00:10:01.774 13686.233 - 13791.512: 91.4987% ( 12) 00:10:01.774 13791.512 - 13896.790: 91.5995% ( 12) 00:10:01.774 13896.790 - 14002.069: 91.6751% ( 9) 00:10:01.774 14002.069 - 14107.348: 91.7423% ( 8) 00:10:01.774 14107.348 - 14212.627: 91.8431% ( 12) 00:10:01.774 14212.627 - 14317.905: 92.0783% ( 28) 00:10:01.774 14317.905 - 14423.184: 92.4059% ( 39) 00:10:01.774 14423.184 - 14528.463: 92.8427% ( 52) 00:10:01.774 14528.463 - 14633.741: 93.0780% ( 28) 00:10:01.774 14633.741 - 14739.020: 93.2880% ( 25) 00:10:01.774 14739.020 - 14844.299: 93.3720% ( 10) 00:10:01.774 14844.299 - 14949.578: 93.4644% ( 11) 00:10:01.774 14949.578 - 15054.856: 93.5820% ( 14) 00:10:01.774 15054.856 - 15160.135: 93.8004% ( 26) 00:10:01.774 15160.135 - 15265.414: 94.1784% ( 45) 00:10:01.774 15265.414 - 15370.692: 94.5649% ( 46) 00:10:01.774 15370.692 - 15475.971: 94.8001% ( 28) 00:10:01.774 15475.971 - 15581.250: 94.9177% ( 14) 00:10:01.774 15581.250 - 15686.529: 95.1025% ( 22) 00:10:01.774 15686.529 - 15791.807: 95.3377% ( 28) 00:10:01.774 15791.807 - 15897.086: 95.5393% ( 24) 00:10:01.774 15897.086 - 16002.365: 95.7997% ( 31) 00:10:01.774 16002.365 - 16107.643: 96.0097% ( 25) 00:10:01.774 16107.643 - 16212.922: 96.1610% ( 18) 00:10:01.774 16212.922 - 16318.201: 96.3122% ( 18) 00:10:01.774 16318.201 - 16423.480: 96.4130% ( 12) 00:10:01.774 16423.480 - 16528.758: 96.4970% ( 10) 00:10:01.774 16528.758 - 16634.037: 96.6566% ( 19) 00:10:01.774 16634.037 - 16739.316: 96.8834% ( 27) 00:10:01.774 16739.316 - 16844.594: 97.1858% ( 36) 00:10:01.774 16844.594 - 16949.873: 97.3790% ( 23) 00:10:01.774 16949.873 - 17055.152: 97.5638% ( 22) 00:10:01.774 17055.152 - 17160.431: 97.8495% ( 34) 00:10:01.774 17160.431 - 17265.709: 98.0427% ( 23) 00:10:01.774 17265.709 - 17370.988: 98.2527% ( 25) 00:10:01.774 17370.988 - 17476.267: 98.3955% ( 17) 00:10:01.774 17476.267 - 17581.545: 98.7483% ( 42) 00:10:01.774 17581.545 - 17686.824: 98.8911% ( 17) 00:10:01.774 17686.824 - 17792.103: 98.9247% ( 4) 00:10:01.774 27583.023 - 27793.581: 98.9499% ( 3) 00:10:01.774 27793.581 - 28004.138: 98.9919% ( 5) 00:10:01.774 28004.138 - 28214.696: 99.0339% ( 5) 00:10:01.774 28214.696 - 28425.253: 99.0843% ( 6) 00:10:01.774 28425.253 - 28635.810: 99.1263% ( 5) 00:10:01.774 28635.810 - 28846.368: 99.1767% ( 6) 00:10:01.774 
28846.368 - 29056.925: 99.2188% ( 5) 00:10:01.774 29056.925 - 29267.483: 99.2608% ( 5) 00:10:01.774 29267.483 - 29478.040: 99.3028% ( 5) 00:10:01.774 29478.040 - 29688.598: 99.3532% ( 6) 00:10:01.774 29688.598 - 29899.155: 99.4036% ( 6) 00:10:01.774 29899.155 - 30109.712: 99.4540% ( 6) 00:10:01.774 30109.712 - 30320.270: 99.4624% ( 1) 00:10:01.774 37479.222 - 37689.780: 99.4876% ( 3) 00:10:01.774 37689.780 - 37900.337: 99.5296% ( 5) 00:10:01.774 37900.337 - 38110.895: 99.5800% ( 6) 00:10:01.774 38110.895 - 38321.452: 99.6220% ( 5) 00:10:01.774 38321.452 - 38532.010: 99.6640% ( 5) 00:10:01.774 38532.010 - 38742.567: 99.7144% ( 6) 00:10:01.774 38742.567 - 38953.124: 99.7564% ( 5) 00:10:01.774 38953.124 - 39163.682: 99.8068% ( 6) 00:10:01.774 39163.682 - 39374.239: 99.8572% ( 6) 00:10:01.775 39374.239 - 39584.797: 99.9160% ( 7) 00:10:01.775 39584.797 - 39795.354: 99.9664% ( 6) 00:10:01.775 39795.354 - 40005.912: 100.0000% ( 4) 00:10:01.775 00:10:01.775 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:10:01.775 ============================================================================== 00:10:01.775 Range in us Cumulative IO count 00:10:01.775 8422.297 - 8474.937: 0.0668% ( 8) 00:10:01.775 8474.937 - 8527.576: 0.1337% ( 8) 00:10:01.775 8527.576 - 8580.215: 0.2089% ( 9) 00:10:01.775 8580.215 - 8632.855: 0.3509% ( 17) 00:10:01.775 8632.855 - 8685.494: 0.4345% ( 10) 00:10:01.775 8685.494 - 8738.133: 0.6016% ( 20) 00:10:01.775 8738.133 - 8790.773: 0.8690% ( 32) 00:10:01.775 8790.773 - 8843.412: 1.3035% ( 52) 00:10:01.775 8843.412 - 8896.051: 2.0053% ( 84) 00:10:01.775 8896.051 - 8948.691: 2.7406% ( 88) 00:10:01.775 8948.691 - 9001.330: 3.5094% ( 92) 00:10:01.775 9001.330 - 9053.969: 4.6039% ( 131) 00:10:01.775 9053.969 - 9106.609: 6.0912% ( 178) 00:10:01.775 9106.609 - 9159.248: 7.5953% ( 180) 00:10:01.775 9159.248 - 9211.888: 9.7259% ( 255) 00:10:01.775 9211.888 - 9264.527: 11.7731% ( 245) 00:10:01.775 9264.527 - 9317.166: 14.5555% ( 333) 00:10:01.775 9317.166 - 9369.806: 17.1959% ( 316) 00:10:01.775 9369.806 - 9422.445: 20.8807% ( 441) 00:10:01.775 9422.445 - 9475.084: 24.7410% ( 462) 00:10:01.775 9475.084 - 9527.724: 28.2587% ( 421) 00:10:01.775 9527.724 - 9580.363: 31.9268% ( 439) 00:10:01.775 9580.363 - 9633.002: 35.7119% ( 453) 00:10:01.775 9633.002 - 9685.642: 38.9706% ( 390) 00:10:01.775 9685.642 - 9738.281: 42.3546% ( 405) 00:10:01.775 9738.281 - 9790.920: 45.6801% ( 398) 00:10:01.775 9790.920 - 9843.560: 48.9221% ( 388) 00:10:01.775 9843.560 - 9896.199: 52.0638% ( 376) 00:10:01.775 9896.199 - 9948.839: 55.4813% ( 409) 00:10:01.775 9948.839 - 10001.478: 58.1217% ( 316) 00:10:01.775 10001.478 - 10054.117: 60.3025% ( 261) 00:10:01.775 10054.117 - 10106.757: 62.4833% ( 261) 00:10:01.775 10106.757 - 10159.396: 64.3717% ( 226) 00:10:01.775 10159.396 - 10212.035: 65.9007% ( 183) 00:10:01.775 10212.035 - 10264.675: 67.0956% ( 143) 00:10:01.775 10264.675 - 10317.314: 68.3740% ( 153) 00:10:01.775 10317.314 - 10369.953: 69.2848% ( 109) 00:10:01.775 10369.953 - 10422.593: 70.2039% ( 110) 00:10:01.775 10422.593 - 10475.232: 71.3570% ( 138) 00:10:01.775 10475.232 - 10527.871: 72.6270% ( 152) 00:10:01.775 10527.871 - 10580.511: 73.6798% ( 126) 00:10:01.775 10580.511 - 10633.150: 74.8914% ( 145) 00:10:01.775 10633.150 - 10685.790: 75.9693% ( 129) 00:10:01.775 10685.790 - 10738.429: 76.8717% ( 108) 00:10:01.775 10738.429 - 10791.068: 77.4733% ( 72) 00:10:01.775 10791.068 - 10843.708: 78.1918% ( 86) 00:10:01.775 10843.708 - 10896.347: 78.8269% ( 76) 00:10:01.775 10896.347 - 10948.986: 
79.5789% ( 90) 00:10:01.775 10948.986 - 11001.626: 80.2306% ( 78) 00:10:01.775 11001.626 - 11054.265: 80.5314% ( 36) 00:10:01.775 11054.265 - 11106.904: 80.8239% ( 35) 00:10:01.775 11106.904 - 11159.544: 81.0662% ( 29) 00:10:01.775 11159.544 - 11212.183: 81.2834% ( 26) 00:10:01.775 11212.183 - 11264.822: 81.5257% ( 29) 00:10:01.775 11264.822 - 11317.462: 81.7848% ( 31) 00:10:01.775 11317.462 - 11370.101: 82.1357% ( 42) 00:10:01.775 11370.101 - 11422.741: 82.5201% ( 46) 00:10:01.775 11422.741 - 11475.380: 82.8626% ( 41) 00:10:01.775 11475.380 - 11528.019: 83.0632% ( 24) 00:10:01.775 11528.019 - 11580.659: 83.2721% ( 25) 00:10:01.775 11580.659 - 11633.298: 83.5227% ( 30) 00:10:01.775 11633.298 - 11685.937: 83.7483% ( 27) 00:10:01.775 11685.937 - 11738.577: 84.0241% ( 33) 00:10:01.775 11738.577 - 11791.216: 84.1578% ( 16) 00:10:01.775 11791.216 - 11843.855: 84.2914% ( 16) 00:10:01.775 11843.855 - 11896.495: 84.4251% ( 16) 00:10:01.775 11896.495 - 11949.134: 84.5421% ( 14) 00:10:01.775 11949.134 - 12001.773: 84.6424% ( 12) 00:10:01.775 12001.773 - 12054.413: 84.7426% ( 12) 00:10:01.775 12054.413 - 12107.052: 84.8930% ( 18) 00:10:01.775 12107.052 - 12159.692: 85.0769% ( 22) 00:10:01.775 12159.692 - 12212.331: 85.2523% ( 21) 00:10:01.775 12212.331 - 12264.970: 85.5197% ( 32) 00:10:01.775 12264.970 - 12317.610: 85.7787% ( 31) 00:10:01.775 12317.610 - 12370.249: 85.9291% ( 18) 00:10:01.775 12370.249 - 12422.888: 86.0795% ( 18) 00:10:01.775 12422.888 - 12475.528: 86.3219% ( 29) 00:10:01.775 12475.528 - 12528.167: 86.5809% ( 31) 00:10:01.775 12528.167 - 12580.806: 86.8483% ( 32) 00:10:01.775 12580.806 - 12633.446: 86.9318% ( 10) 00:10:01.775 12633.446 - 12686.085: 87.0237% ( 11) 00:10:01.775 12686.085 - 12738.724: 87.0989% ( 9) 00:10:01.775 12738.724 - 12791.364: 87.1825% ( 10) 00:10:01.775 12791.364 - 12844.003: 87.2744% ( 11) 00:10:01.775 12844.003 - 12896.643: 87.3663% ( 11) 00:10:01.775 12896.643 - 12949.282: 87.4749% ( 13) 00:10:01.775 12949.282 - 13001.921: 87.5585% ( 10) 00:10:01.775 13001.921 - 13054.561: 87.6755% ( 14) 00:10:01.775 13054.561 - 13107.200: 87.8092% ( 16) 00:10:01.775 13107.200 - 13159.839: 87.9763% ( 20) 00:10:01.775 13159.839 - 13212.479: 88.2102% ( 28) 00:10:01.775 13212.479 - 13265.118: 88.3773% ( 20) 00:10:01.775 13265.118 - 13317.757: 88.5194% ( 17) 00:10:01.775 13317.757 - 13370.397: 88.6865% ( 20) 00:10:01.775 13370.397 - 13423.036: 88.8453% ( 19) 00:10:01.775 13423.036 - 13475.676: 88.9957% ( 18) 00:10:01.775 13475.676 - 13580.954: 89.4385% ( 53) 00:10:01.775 13580.954 - 13686.233: 89.6808% ( 29) 00:10:01.775 13686.233 - 13791.512: 90.0401% ( 43) 00:10:01.775 13791.512 - 13896.790: 90.6584% ( 74) 00:10:01.775 13896.790 - 14002.069: 91.1347% ( 57) 00:10:01.775 14002.069 - 14107.348: 91.6945% ( 67) 00:10:01.775 14107.348 - 14212.627: 91.8700% ( 21) 00:10:01.775 14212.627 - 14317.905: 91.9201% ( 6) 00:10:01.775 14317.905 - 14423.184: 91.9452% ( 3) 00:10:01.775 14423.184 - 14528.463: 91.9619% ( 2) 00:10:01.775 14528.463 - 14633.741: 92.0789% ( 14) 00:10:01.775 14633.741 - 14739.020: 92.2460% ( 20) 00:10:01.775 14739.020 - 14844.299: 92.5719% ( 39) 00:10:01.775 14844.299 - 14949.578: 93.0732% ( 60) 00:10:01.775 14949.578 - 15054.856: 93.2320% ( 19) 00:10:01.775 15054.856 - 15160.135: 93.4826% ( 30) 00:10:01.775 15160.135 - 15265.414: 93.7834% ( 36) 00:10:01.775 15265.414 - 15370.692: 94.1678% ( 46) 00:10:01.775 15370.692 - 15475.971: 94.4602% ( 35) 00:10:01.775 15475.971 - 15581.250: 94.7193% ( 31) 00:10:01.775 15581.250 - 15686.529: 94.9866% ( 32) 00:10:01.775 15686.529 - 
15791.807: 95.2707% ( 34) 00:10:01.775 15791.807 - 15897.086: 95.5966% ( 39) 00:10:01.775 15897.086 - 16002.365: 95.8305% ( 28) 00:10:01.775 16002.365 - 16107.643: 96.0645% ( 28) 00:10:01.775 16107.643 - 16212.922: 96.3235% ( 31) 00:10:01.775 16212.922 - 16318.201: 96.5241% ( 24) 00:10:01.775 16318.201 - 16423.480: 96.7914% ( 32) 00:10:01.775 16423.480 - 16528.758: 97.2176% ( 51) 00:10:01.775 16528.758 - 16634.037: 97.4265% ( 25) 00:10:01.775 16634.037 - 16739.316: 97.6437% ( 26) 00:10:01.775 16739.316 - 16844.594: 97.7356% ( 11) 00:10:01.775 16844.594 - 16949.873: 97.8275% ( 11) 00:10:01.775 16949.873 - 17055.152: 97.8610% ( 4) 00:10:01.775 17055.152 - 17160.431: 97.8693% ( 1) 00:10:01.775 17265.709 - 17370.988: 97.9362% ( 8) 00:10:01.775 17370.988 - 17476.267: 98.0364% ( 12) 00:10:01.775 17476.267 - 17581.545: 98.2537% ( 26) 00:10:01.775 17581.545 - 17686.824: 98.4459% ( 23) 00:10:01.775 17686.824 - 17792.103: 98.7299% ( 34) 00:10:01.775 17792.103 - 17897.382: 98.8469% ( 14) 00:10:01.776 17897.382 - 18002.660: 98.9221% ( 9) 00:10:01.776 18002.660 - 18107.939: 98.9305% ( 1) 00:10:01.776 18634.333 - 18739.611: 98.9388% ( 1) 00:10:01.776 18739.611 - 18844.890: 98.9723% ( 4) 00:10:01.776 18844.890 - 18950.169: 99.0057% ( 4) 00:10:01.776 18950.169 - 19055.447: 99.0391% ( 4) 00:10:01.776 19055.447 - 19160.726: 99.0725% ( 4) 00:10:01.776 19160.726 - 19266.005: 99.1143% ( 5) 00:10:01.776 19266.005 - 19371.284: 99.1477% ( 4) 00:10:01.776 19371.284 - 19476.562: 99.1811% ( 4) 00:10:01.776 19476.562 - 19581.841: 99.2146% ( 4) 00:10:01.776 19581.841 - 19687.120: 99.2480% ( 4) 00:10:01.776 19687.120 - 19792.398: 99.2814% ( 4) 00:10:01.776 19792.398 - 19897.677: 99.3148% ( 4) 00:10:01.776 19897.677 - 20002.956: 99.3566% ( 5) 00:10:01.776 20002.956 - 20108.235: 99.3900% ( 4) 00:10:01.776 20108.235 - 20213.513: 99.4235% ( 4) 00:10:01.776 20213.513 - 20318.792: 99.4652% ( 5) 00:10:01.776 27793.581 - 28004.138: 99.5070% ( 5) 00:10:01.776 28004.138 - 28214.696: 99.5572% ( 6) 00:10:01.776 28214.696 - 28425.253: 99.6073% ( 6) 00:10:01.776 28425.253 - 28635.810: 99.6491% ( 5) 00:10:01.776 28635.810 - 28846.368: 99.6908% ( 5) 00:10:01.776 28846.368 - 29056.925: 99.7410% ( 6) 00:10:01.776 29056.925 - 29267.483: 99.7911% ( 6) 00:10:01.776 29267.483 - 29478.040: 99.8412% ( 6) 00:10:01.776 29478.040 - 29688.598: 99.8830% ( 5) 00:10:01.776 29688.598 - 29899.155: 99.9332% ( 6) 00:10:01.776 29899.155 - 30109.712: 99.9749% ( 5) 00:10:01.776 30109.712 - 30320.270: 100.0000% ( 3) 00:10:01.776 00:10:01.776 03:54:44 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:10:01.776 00:10:01.776 real 0m2.689s 00:10:01.776 user 0m2.282s 00:10:01.776 sys 0m0.301s 00:10:01.776 03:54:44 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.776 03:54:44 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:10:01.776 ************************************ 00:10:01.776 END TEST nvme_perf 00:10:01.776 ************************************ 00:10:01.776 03:54:44 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:01.776 03:54:44 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:01.776 03:54:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.776 03:54:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:01.776 ************************************ 00:10:01.776 START TEST nvme_hello_world 00:10:01.776 ************************************ 00:10:01.776 03:54:44 nvme.nvme_hello_world -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:10:02.036 Initializing NVMe Controllers 00:10:02.036 Attached to 0000:00:10.0 00:10:02.036 Namespace ID: 1 size: 6GB 00:10:02.036 Attached to 0000:00:11.0 00:10:02.036 Namespace ID: 1 size: 5GB 00:10:02.036 Attached to 0000:00:13.0 00:10:02.036 Namespace ID: 1 size: 1GB 00:10:02.036 Attached to 0000:00:12.0 00:10:02.036 Namespace ID: 1 size: 4GB 00:10:02.036 Namespace ID: 2 size: 4GB 00:10:02.036 Namespace ID: 3 size: 4GB 00:10:02.036 Initialization complete. 00:10:02.036 INFO: using host memory buffer for IO 00:10:02.036 Hello world! 00:10:02.036 INFO: using host memory buffer for IO 00:10:02.036 Hello world! 00:10:02.036 INFO: using host memory buffer for IO 00:10:02.036 Hello world! 00:10:02.036 INFO: using host memory buffer for IO 00:10:02.036 Hello world! 00:10:02.036 INFO: using host memory buffer for IO 00:10:02.036 Hello world! 00:10:02.036 INFO: using host memory buffer for IO 00:10:02.036 Hello world! 00:10:02.036 00:10:02.036 real 0m0.296s 00:10:02.036 user 0m0.112s 00:10:02.036 sys 0m0.144s 00:10:02.036 03:54:44 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.036 03:54:44 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:10:02.036 ************************************ 00:10:02.036 END TEST nvme_hello_world 00:10:02.036 ************************************ 00:10:02.036 03:54:44 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:02.036 03:54:44 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:02.036 03:54:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.036 03:54:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:02.036 ************************************ 00:10:02.036 START TEST nvme_sgl 00:10:02.036 ************************************ 00:10:02.036 03:54:44 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:10:02.296 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:10:02.296 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:10:02.296 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:10:02.296 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:10:02.296 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:10:02.296 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:10:02.296 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:10:02.296 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:10:02.296 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:10:02.296 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:10:02.296 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:10:02.296 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:10:02.296 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:10:02.296 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:10:02.296 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:10:02.296 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:10:02.296 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:10:02.296 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:10:02.296 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:10:02.296 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:10:02.296 
0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:10:02.296 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:10:02.296 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:10:02.296 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:10:02.296 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:10:02.296 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:10:02.296 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:10:02.296 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:10:02.296 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:10:02.296 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:10:02.296 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:10:02.296 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:10:02.296 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:10:02.296 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:10:02.296 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:10:02.296 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:10:02.296 NVMe Readv/Writev Request test 00:10:02.296 Attached to 0000:00:10.0 00:10:02.296 Attached to 0000:00:11.0 00:10:02.296 Attached to 0000:00:13.0 00:10:02.296 Attached to 0000:00:12.0 00:10:02.296 0000:00:10.0: build_io_request_2 test passed 00:10:02.296 0000:00:10.0: build_io_request_4 test passed 00:10:02.296 0000:00:10.0: build_io_request_5 test passed 00:10:02.296 0000:00:10.0: build_io_request_6 test passed 00:10:02.296 0000:00:10.0: build_io_request_7 test passed 00:10:02.296 0000:00:10.0: build_io_request_10 test passed 00:10:02.296 0000:00:11.0: build_io_request_2 test passed 00:10:02.296 0000:00:11.0: build_io_request_4 test passed 00:10:02.296 0000:00:11.0: build_io_request_5 test passed 00:10:02.296 0000:00:11.0: build_io_request_6 test passed 00:10:02.296 0000:00:11.0: build_io_request_7 test passed 00:10:02.296 0000:00:11.0: build_io_request_10 test passed 00:10:02.296 Cleaning up... 00:10:02.296 00:10:02.296 real 0m0.354s 00:10:02.296 user 0m0.170s 00:10:02.296 sys 0m0.143s 00:10:02.296 03:54:44 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.296 03:54:44 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:10:02.296 ************************************ 00:10:02.296 END TEST nvme_sgl 00:10:02.296 ************************************ 00:10:02.296 03:54:45 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:02.296 03:54:45 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:02.296 03:54:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.296 03:54:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:02.560 ************************************ 00:10:02.560 START TEST nvme_e2edp 00:10:02.560 ************************************ 00:10:02.560 03:54:45 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:10:02.818 NVMe Write/Read with End-to-End data protection test 00:10:02.818 Attached to 0000:00:10.0 00:10:02.818 Attached to 0000:00:11.0 00:10:02.818 Attached to 0000:00:13.0 00:10:02.818 Attached to 0000:00:12.0 00:10:02.818 Cleaning up... 
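A quick way to triage build_io_request output like the block above is to tally outcomes per controller. A minimal sketch, not part of the SPDK tree, assuming only the exact "test passed" / "Invalid IO length parameter" phrasing shown in this run and a captured log with one record per line:

#!/usr/bin/env bash
# summarize_sgl.sh -- hypothetical helper: count SGL test outcomes per
# controller from a captured autotest log (format as seen in this run).
log="${1:?usage: summarize_sgl.sh <autotest-log>}"

awk '
  /build_io_request_[0-9]+/ {
    # Locate the PCI address token, e.g. "0000:00:10.0:".
    bdf = ""
    for (i = 1; i <= NF; i++)
      if ($i ~ /^[0-9a-f]+:[0-9a-f]+:[0-9a-f]+\.[0-9]+:$/) bdf = $i
    if (bdf == "") next
    if ($0 ~ /test passed/)            pass[bdf]++
    else if ($0 ~ /Invalid IO length/) invalid[bdf]++
  }
  END {
    for (b in pass)
      printf "%s  passed=%d  invalid-length=%d\n", b, pass[b], invalid[b]
  }
' "$log"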
00:10:02.818 00:10:02.818 real 0m0.306s 00:10:02.818 user 0m0.099s 00:10:02.818 sys 0m0.158s 00:10:02.818 03:54:45 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.818 03:54:45 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:10:02.818 ************************************ 00:10:02.818 END TEST nvme_e2edp 00:10:02.818 ************************************ 00:10:02.818 03:54:45 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:02.818 03:54:45 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:02.818 03:54:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.818 03:54:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:02.818 ************************************ 00:10:02.818 START TEST nvme_reserve 00:10:02.818 ************************************ 00:10:02.818 03:54:45 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:10:03.076 ===================================================== 00:10:03.076 NVMe Controller at PCI bus 0, device 16, function 0 00:10:03.076 ===================================================== 00:10:03.076 Reservations: Not Supported 00:10:03.076 ===================================================== 00:10:03.076 NVMe Controller at PCI bus 0, device 17, function 0 00:10:03.076 ===================================================== 00:10:03.076 Reservations: Not Supported 00:10:03.076 ===================================================== 00:10:03.076 NVMe Controller at PCI bus 0, device 19, function 0 00:10:03.076 ===================================================== 00:10:03.076 Reservations: Not Supported 00:10:03.076 ===================================================== 00:10:03.076 NVMe Controller at PCI bus 0, device 18, function 0 00:10:03.076 ===================================================== 00:10:03.076 Reservations: Not Supported 00:10:03.076 Reservation test passed 00:10:03.076 00:10:03.076 real 0m0.274s 00:10:03.076 user 0m0.093s 00:10:03.076 sys 0m0.139s 00:10:03.076 03:54:45 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.076 03:54:45 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:10:03.076 ************************************ 00:10:03.076 END TEST nvme_reserve 00:10:03.076 ************************************ 00:10:03.076 03:54:45 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:03.076 03:54:45 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:03.076 03:54:45 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.076 03:54:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:03.076 ************************************ 00:10:03.076 START TEST nvme_err_injection 00:10:03.076 ************************************ 00:10:03.076 03:54:45 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:10:03.335 NVMe Error Injection test 00:10:03.335 Attached to 0000:00:10.0 00:10:03.335 Attached to 0000:00:11.0 00:10:03.335 Attached to 0000:00:13.0 00:10:03.335 Attached to 0000:00:12.0 00:10:03.335 0000:00:12.0: get features failed as expected 00:10:03.335 0000:00:10.0: get features failed as expected 00:10:03.335 0000:00:11.0: get features failed as expected 00:10:03.335 0000:00:13.0: get features failed as expected 00:10:03.335 
0000:00:11.0: get features successfully as expected 00:10:03.335 0000:00:13.0: get features successfully as expected 00:10:03.335 0000:00:12.0: get features successfully as expected 00:10:03.335 0000:00:10.0: get features successfully as expected 00:10:03.335 0000:00:11.0: read failed as expected 00:10:03.335 0000:00:13.0: read failed as expected 00:10:03.335 0000:00:12.0: read failed as expected 00:10:03.335 0000:00:10.0: read failed as expected 00:10:03.335 0000:00:11.0: read successfully as expected 00:10:03.335 0000:00:10.0: read successfully as expected 00:10:03.335 0000:00:13.0: read successfully as expected 00:10:03.335 0000:00:12.0: read successfully as expected 00:10:03.335 Cleaning up... 00:10:03.335 00:10:03.335 real 0m0.296s 00:10:03.335 user 0m0.093s 00:10:03.335 sys 0m0.161s 00:10:03.335 03:54:46 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:03.335 03:54:46 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:10:03.335 ************************************ 00:10:03.335 END TEST nvme_err_injection 00:10:03.335 ************************************ 00:10:03.594 03:54:46 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:03.594 03:54:46 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:10:03.594 03:54:46 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:03.594 03:54:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:03.594 ************************************ 00:10:03.594 START TEST nvme_overhead 00:10:03.594 ************************************ 00:10:03.594 03:54:46 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:10:04.972 Initializing NVMe Controllers 00:10:04.972 Attached to 0000:00:10.0 00:10:04.972 Attached to 0000:00:11.0 00:10:04.972 Attached to 0000:00:13.0 00:10:04.972 Attached to 0000:00:12.0 00:10:04.972 Initialization complete. Launching workers. 
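The histograms that follow print one row per latency bucket, "LOW - HIGH: CUM% ( COUNT)", where the percentage is cumulative. To read a percentile off such a table, a sketch in awk -- an illustration only, assuming the Jenkins timestamp prefixes have been stripped and exactly this row shape:

# Print the first bucket upper bound whose cumulative percentage reaches
# the target (here p99). The invocation and file name are hypothetical.
awk -v target=99 '
  {
    hi = ""; cum = -1
    for (i = 1; i < NF; i++) {
      # "LOW - HIGH:" -- take HIGH, dropping the trailing colon
      if ($i == "-" && $(i + 1) ~ /:$/) { hi = $(i + 1); sub(/:$/, "", hi) }
      # "CUM%" -- numeric coercion ignores the trailing percent sign
      if ($i ~ /%$/) cum = $i + 0
    }
    if (hi != "" && cum >= target) { printf "p%d <= %s us\n", target, hi; exit }
  }
' overhead.log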
00:10:04.972 submit (in ns) avg, min, max = 13273.3, 11566.3, 102841.8 00:10:04.972 complete (in ns) avg, min, max = 8876.0, 7788.8, 47520.5 00:10:04.972 00:10:04.972 Submit histogram 00:10:04.972 ================ 00:10:04.972 Range in us Cumulative Count 00:10:04.972 11.566 - 11.618: 0.0358% ( 2) 00:10:04.972 11.618 - 11.669: 0.0537% ( 1) 00:10:04.972 11.669 - 11.720: 0.1612% ( 6) 00:10:04.972 11.720 - 11.772: 0.3583% ( 11) 00:10:04.972 11.772 - 11.823: 0.5912% ( 13) 00:10:04.972 11.823 - 11.875: 1.0032% ( 23) 00:10:04.972 11.875 - 11.926: 1.7915% ( 44) 00:10:04.972 11.926 - 11.978: 2.5618% ( 43) 00:10:04.972 11.978 - 12.029: 3.4217% ( 48) 00:10:04.972 12.029 - 12.080: 4.6041% ( 66) 00:10:04.972 12.080 - 12.132: 5.7327% ( 63) 00:10:04.972 12.132 - 12.183: 6.7001% ( 54) 00:10:04.972 12.183 - 12.235: 7.8466% ( 64) 00:10:04.973 12.235 - 12.286: 9.3873% ( 86) 00:10:04.973 12.286 - 12.337: 11.1071% ( 96) 00:10:04.973 12.337 - 12.389: 13.7764% ( 149) 00:10:04.973 12.389 - 12.440: 17.1444% ( 188) 00:10:04.973 12.440 - 12.492: 21.0498% ( 218) 00:10:04.973 12.492 - 12.543: 25.6718% ( 258) 00:10:04.973 12.543 - 12.594: 30.2221% ( 254) 00:10:04.973 12.594 - 12.646: 35.4174% ( 290) 00:10:04.973 12.646 - 12.697: 40.8635% ( 304) 00:10:04.973 12.697 - 12.749: 46.6320% ( 322) 00:10:04.973 12.749 - 12.800: 51.9706% ( 298) 00:10:04.973 12.800 - 12.851: 57.9183% ( 332) 00:10:04.973 12.851 - 12.903: 63.1494% ( 292) 00:10:04.973 12.903 - 12.954: 68.5776% ( 303) 00:10:04.973 12.954 - 13.006: 72.9129% ( 242) 00:10:04.973 13.006 - 13.057: 76.8183% ( 218) 00:10:04.973 13.057 - 13.108: 80.2042% ( 189) 00:10:04.973 13.108 - 13.160: 83.1243% ( 163) 00:10:04.973 13.160 - 13.263: 87.0297% ( 218) 00:10:04.973 13.263 - 13.365: 89.6095% ( 144) 00:10:04.973 13.365 - 13.468: 90.7023% ( 61) 00:10:04.973 13.468 - 13.571: 91.1680% ( 26) 00:10:04.973 13.571 - 13.674: 91.3651% ( 11) 00:10:04.973 13.674 - 13.777: 91.5801% ( 12) 00:10:04.973 13.777 - 13.880: 91.6159% ( 2) 00:10:04.973 13.982 - 14.085: 91.6338% ( 1) 00:10:04.973 14.085 - 14.188: 91.6697% ( 2) 00:10:04.973 14.188 - 14.291: 91.7234% ( 3) 00:10:04.973 14.394 - 14.496: 91.7951% ( 4) 00:10:04.973 14.496 - 14.599: 91.8309% ( 2) 00:10:04.973 14.599 - 14.702: 91.8488% ( 1) 00:10:04.973 14.702 - 14.805: 91.8667% ( 1) 00:10:04.973 14.805 - 14.908: 91.9205% ( 3) 00:10:04.973 14.908 - 15.010: 92.0100% ( 5) 00:10:04.973 15.010 - 15.113: 92.0459% ( 2) 00:10:04.973 15.113 - 15.216: 92.0996% ( 3) 00:10:04.973 15.216 - 15.319: 92.1892% ( 5) 00:10:04.973 15.319 - 15.422: 92.2967% ( 6) 00:10:04.973 15.422 - 15.524: 92.3683% ( 4) 00:10:04.973 15.524 - 15.627: 92.5475% ( 10) 00:10:04.973 15.627 - 15.730: 92.6012% ( 3) 00:10:04.973 15.730 - 15.833: 92.6729% ( 4) 00:10:04.973 15.833 - 15.936: 92.7445% ( 4) 00:10:04.973 15.936 - 16.039: 92.7804% ( 2) 00:10:04.973 16.039 - 16.141: 92.8341% ( 3) 00:10:04.973 16.141 - 16.244: 92.8699% ( 2) 00:10:04.973 16.244 - 16.347: 92.9058% ( 2) 00:10:04.973 16.347 - 16.450: 92.9237% ( 1) 00:10:04.973 16.450 - 16.553: 92.9774% ( 3) 00:10:04.973 16.553 - 16.655: 93.0670% ( 5) 00:10:04.973 16.655 - 16.758: 93.1566% ( 5) 00:10:04.973 16.758 - 16.861: 93.2820% ( 7) 00:10:04.973 16.861 - 16.964: 93.4432% ( 9) 00:10:04.973 16.964 - 17.067: 93.6044% ( 9) 00:10:04.973 17.067 - 17.169: 93.8373% ( 13) 00:10:04.973 17.169 - 17.272: 94.0523% ( 12) 00:10:04.973 17.272 - 17.375: 94.4823% ( 24) 00:10:04.973 17.375 - 17.478: 94.6972% ( 12) 00:10:04.973 17.478 - 17.581: 95.0735% ( 21) 00:10:04.973 17.581 - 17.684: 95.3780% ( 17) 00:10:04.973 17.684 - 17.786: 95.6646% ( 
16) 00:10:04.973 17.786 - 17.889: 95.7900% ( 7) 00:10:04.973 17.889 - 17.992: 96.0229% ( 13) 00:10:04.973 17.992 - 18.095: 96.2917% ( 15) 00:10:04.973 18.095 - 18.198: 96.3454% ( 3) 00:10:04.973 18.198 - 18.300: 96.4529% ( 6) 00:10:04.973 18.300 - 18.403: 96.5783% ( 7) 00:10:04.973 18.403 - 18.506: 96.7037% ( 7) 00:10:04.973 18.506 - 18.609: 96.9724% ( 15) 00:10:04.973 18.609 - 18.712: 97.1336% ( 9) 00:10:04.973 18.712 - 18.814: 97.2411% ( 6) 00:10:04.973 18.814 - 18.917: 97.3665% ( 7) 00:10:04.973 18.917 - 19.020: 97.4561% ( 5) 00:10:04.973 19.020 - 19.123: 97.5278% ( 4) 00:10:04.973 19.123 - 19.226: 97.5994% ( 4) 00:10:04.973 19.226 - 19.329: 97.6532% ( 3) 00:10:04.973 19.329 - 19.431: 97.6711% ( 1) 00:10:04.973 19.431 - 19.534: 97.7427% ( 4) 00:10:04.973 19.534 - 19.637: 97.7786% ( 2) 00:10:04.973 19.637 - 19.740: 97.7965% ( 1) 00:10:04.973 19.740 - 19.843: 97.8323% ( 2) 00:10:04.973 19.843 - 19.945: 97.8681% ( 2) 00:10:04.973 19.945 - 20.048: 97.9040% ( 2) 00:10:04.973 20.048 - 20.151: 97.9219% ( 1) 00:10:04.973 20.151 - 20.254: 97.9936% ( 4) 00:10:04.973 20.254 - 20.357: 98.0473% ( 3) 00:10:04.973 20.357 - 20.459: 98.1190% ( 4) 00:10:04.973 20.459 - 20.562: 98.1727% ( 3) 00:10:04.973 20.562 - 20.665: 98.2264% ( 3) 00:10:04.973 20.665 - 20.768: 98.2623% ( 2) 00:10:04.973 20.768 - 20.871: 98.3160% ( 3) 00:10:04.973 20.973 - 21.076: 98.3339% ( 1) 00:10:04.973 21.076 - 21.179: 98.3877% ( 3) 00:10:04.973 21.385 - 21.488: 98.4056% ( 1) 00:10:04.973 21.488 - 21.590: 98.4235% ( 1) 00:10:04.973 22.002 - 22.104: 98.4414% ( 1) 00:10:04.973 22.104 - 22.207: 98.4593% ( 1) 00:10:04.973 22.413 - 22.516: 98.4952% ( 2) 00:10:04.973 22.516 - 22.618: 98.5847% ( 5) 00:10:04.973 22.618 - 22.721: 98.6385% ( 3) 00:10:04.973 22.721 - 22.824: 98.6922% ( 3) 00:10:04.973 22.824 - 22.927: 98.8355% ( 8) 00:10:04.973 22.927 - 23.030: 98.8893% ( 3) 00:10:04.973 23.030 - 23.133: 98.9789% ( 5) 00:10:04.973 23.133 - 23.235: 99.1222% ( 8) 00:10:04.973 23.235 - 23.338: 99.3192% ( 11) 00:10:04.973 23.338 - 23.441: 99.3551% ( 2) 00:10:04.973 23.441 - 23.544: 99.4088% ( 3) 00:10:04.973 23.544 - 23.647: 99.4626% ( 3) 00:10:04.973 23.647 - 23.749: 99.4805% ( 1) 00:10:04.973 23.749 - 23.852: 99.5342% ( 3) 00:10:04.973 23.852 - 23.955: 99.5521% ( 1) 00:10:04.973 24.058 - 24.161: 99.6059% ( 3) 00:10:04.973 24.366 - 24.469: 99.6238% ( 1) 00:10:04.973 25.189 - 25.292: 99.6417% ( 1) 00:10:04.973 27.965 - 28.170: 99.6775% ( 2) 00:10:04.973 28.376 - 28.582: 99.7313% ( 3) 00:10:04.973 28.582 - 28.787: 99.7492% ( 1) 00:10:04.973 28.787 - 28.993: 99.7850% ( 2) 00:10:04.973 29.198 - 29.404: 99.8388% ( 3) 00:10:04.973 29.404 - 29.610: 99.8567% ( 1) 00:10:04.973 29.610 - 29.815: 99.8746% ( 1) 00:10:04.973 32.694 - 32.900: 99.8925% ( 1) 00:10:04.973 36.395 - 36.601: 99.9104% ( 1) 00:10:04.973 50.378 - 50.583: 99.9283% ( 1) 00:10:04.973 52.639 - 53.051: 99.9463% ( 1) 00:10:04.973 53.462 - 53.873: 99.9642% ( 1) 00:10:04.973 54.284 - 54.696: 99.9821% ( 1) 00:10:04.973 102.811 - 103.222: 100.0000% ( 1) 00:10:04.973 00:10:04.973 Complete histogram 00:10:04.973 ================== 00:10:04.973 Range in us Cumulative Count 00:10:04.973 7.762 - 7.814: 0.0358% ( 2) 00:10:04.973 7.814 - 7.865: 0.0537% ( 1) 00:10:04.973 7.865 - 7.916: 0.2329% ( 10) 00:10:04.973 7.916 - 7.968: 1.3078% ( 60) 00:10:04.973 7.968 - 8.019: 2.4543% ( 64) 00:10:04.973 8.019 - 8.071: 3.6546% ( 67) 00:10:04.973 8.071 - 8.122: 5.2669% ( 90) 00:10:04.973 8.122 - 8.173: 7.1659% ( 106) 00:10:04.973 8.173 - 8.225: 9.5844% ( 135) 00:10:04.973 8.225 - 8.276: 11.4475% ( 104) 
00:10:04.973 8.276 - 8.328: 16.6249% ( 289) 00:10:04.973 8.328 - 8.379: 23.9162% ( 407) 00:10:04.973 8.379 - 8.431: 29.4339% ( 308) 00:10:04.973 8.431 - 8.482: 37.4059% ( 445) 00:10:04.973 8.482 - 8.533: 45.8438% ( 471) 00:10:04.973 8.533 - 8.585: 54.6041% ( 489) 00:10:04.973 8.585 - 8.636: 62.7374% ( 454) 00:10:04.973 8.636 - 8.688: 70.4586% ( 431) 00:10:04.973 8.688 - 8.739: 76.7646% ( 352) 00:10:04.973 8.739 - 8.790: 81.8345% ( 283) 00:10:04.973 8.790 - 8.842: 86.2057% ( 244) 00:10:04.973 8.842 - 8.893: 88.8570% ( 148) 00:10:04.973 8.893 - 8.945: 90.6843% ( 102) 00:10:04.973 8.945 - 8.996: 92.3683% ( 94) 00:10:04.973 8.996 - 9.047: 93.2461% ( 49) 00:10:04.973 9.047 - 9.099: 93.9090% ( 37) 00:10:04.973 9.099 - 9.150: 94.3927% ( 27) 00:10:04.973 9.150 - 9.202: 94.5898% ( 11) 00:10:04.973 9.202 - 9.253: 94.8943% ( 17) 00:10:04.973 9.253 - 9.304: 95.0555% ( 9) 00:10:04.973 9.304 - 9.356: 95.1093% ( 3) 00:10:04.973 9.356 - 9.407: 95.1630% ( 3) 00:10:04.973 9.459 - 9.510: 95.1989% ( 2) 00:10:04.973 9.664 - 9.716: 95.2168% ( 1) 00:10:04.973 9.767 - 9.818: 95.2347% ( 1) 00:10:04.973 9.921 - 9.973: 95.2526% ( 1) 00:10:04.973 10.024 - 10.076: 95.2705% ( 1) 00:10:04.973 10.127 - 10.178: 95.2884% ( 1) 00:10:04.973 10.281 - 10.333: 95.3422% ( 3) 00:10:04.973 10.384 - 10.435: 95.3601% ( 1) 00:10:04.973 10.435 - 10.487: 95.3959% ( 2) 00:10:04.973 10.487 - 10.538: 95.4855% ( 5) 00:10:04.973 10.538 - 10.590: 95.5751% ( 5) 00:10:04.973 10.590 - 10.641: 95.6646% ( 5) 00:10:04.973 10.641 - 10.692: 95.7542% ( 5) 00:10:04.973 10.692 - 10.744: 95.8259% ( 4) 00:10:04.973 10.744 - 10.795: 95.8796% ( 3) 00:10:04.973 10.795 - 10.847: 95.8975% ( 1) 00:10:04.973 10.847 - 10.898: 95.9513% ( 3) 00:10:04.973 10.898 - 10.949: 96.0050% ( 3) 00:10:04.973 11.104 - 11.155: 96.0408% ( 2) 00:10:04.973 11.206 - 11.258: 96.0767% ( 2) 00:10:04.973 11.258 - 11.309: 96.0946% ( 1) 00:10:04.973 11.309 - 11.361: 96.1304% ( 2) 00:10:04.973 11.361 - 11.412: 96.1483% ( 1) 00:10:04.974 11.412 - 11.463: 96.1662% ( 1) 00:10:04.974 11.463 - 11.515: 96.2379% ( 4) 00:10:04.974 11.515 - 11.566: 96.2737% ( 2) 00:10:04.974 11.566 - 11.618: 96.2917% ( 1) 00:10:04.974 11.618 - 11.669: 96.3454% ( 3) 00:10:04.974 11.669 - 11.720: 96.3812% ( 2) 00:10:04.974 11.772 - 11.823: 96.3991% ( 1) 00:10:04.974 11.823 - 11.875: 96.4171% ( 1) 00:10:04.974 11.875 - 11.926: 96.4350% ( 1) 00:10:04.974 11.926 - 11.978: 96.4708% ( 2) 00:10:04.974 11.978 - 12.029: 96.4887% ( 1) 00:10:04.974 12.029 - 12.080: 96.5066% ( 1) 00:10:04.974 12.080 - 12.132: 96.5245% ( 1) 00:10:04.974 12.183 - 12.235: 96.5425% ( 1) 00:10:04.974 12.235 - 12.286: 96.5783% ( 2) 00:10:04.974 12.286 - 12.337: 96.5962% ( 1) 00:10:04.974 12.337 - 12.389: 96.6320% ( 2) 00:10:04.974 12.440 - 12.492: 96.6679% ( 2) 00:10:04.974 12.543 - 12.594: 96.6858% ( 1) 00:10:04.974 12.697 - 12.749: 96.7574% ( 4) 00:10:04.974 12.749 - 12.800: 96.8112% ( 3) 00:10:04.974 12.851 - 12.903: 96.8470% ( 2) 00:10:04.974 12.903 - 12.954: 96.8649% ( 1) 00:10:04.974 12.954 - 13.006: 96.9366% ( 4) 00:10:04.974 13.006 - 13.057: 96.9724% ( 2) 00:10:04.974 13.108 - 13.160: 96.9903% ( 1) 00:10:04.974 13.160 - 13.263: 97.0441% ( 3) 00:10:04.974 13.263 - 13.365: 97.0799% ( 2) 00:10:04.974 13.365 - 13.468: 97.1336% ( 3) 00:10:04.974 13.571 - 13.674: 97.1695% ( 2) 00:10:04.974 13.674 - 13.777: 97.2232% ( 3) 00:10:04.974 13.777 - 13.880: 97.2590% ( 2) 00:10:04.974 13.880 - 13.982: 97.2770% ( 1) 00:10:04.974 13.982 - 14.085: 97.3845% ( 6) 00:10:04.974 14.085 - 14.188: 97.4203% ( 2) 00:10:04.974 14.188 - 14.291: 97.5099% ( 5) 
00:10:04.974 14.291 - 14.394: 97.6173% ( 6) 00:10:04.974 14.394 - 14.496: 97.6890% ( 4) 00:10:04.974 14.496 - 14.599: 97.7607% ( 4) 00:10:04.974 14.599 - 14.702: 97.7786% ( 1) 00:10:04.974 14.702 - 14.805: 97.8144% ( 2) 00:10:04.974 14.805 - 14.908: 97.8861% ( 4) 00:10:04.974 14.908 - 15.010: 97.9577% ( 4) 00:10:04.974 15.010 - 15.113: 97.9936% ( 2) 00:10:04.974 15.113 - 15.216: 98.0294% ( 2) 00:10:04.974 15.422 - 15.524: 98.0473% ( 1) 00:10:04.974 15.524 - 15.627: 98.0652% ( 1) 00:10:04.974 16.964 - 17.067: 98.0831% ( 1) 00:10:04.974 17.272 - 17.375: 98.1010% ( 1) 00:10:04.974 17.478 - 17.581: 98.1190% ( 1) 00:10:04.974 18.198 - 18.300: 98.1369% ( 1) 00:10:04.974 18.300 - 18.403: 98.1548% ( 1) 00:10:04.974 18.403 - 18.506: 98.1727% ( 1) 00:10:04.974 18.506 - 18.609: 98.2802% ( 6) 00:10:04.974 18.609 - 18.712: 98.3698% ( 5) 00:10:04.974 18.712 - 18.814: 98.4952% ( 7) 00:10:04.974 18.814 - 18.917: 98.6743% ( 10) 00:10:04.974 18.917 - 19.020: 98.8714% ( 11) 00:10:04.974 19.020 - 19.123: 99.0863% ( 12) 00:10:04.974 19.123 - 19.226: 99.2834% ( 11) 00:10:04.974 19.226 - 19.329: 99.3909% ( 6) 00:10:04.974 19.329 - 19.431: 99.4267% ( 2) 00:10:04.974 19.431 - 19.534: 99.5521% ( 7) 00:10:04.974 19.534 - 19.637: 99.5700% ( 1) 00:10:04.974 19.637 - 19.740: 99.5880% ( 1) 00:10:04.974 19.945 - 20.048: 99.6059% ( 1) 00:10:04.974 20.357 - 20.459: 99.6238% ( 1) 00:10:04.974 20.871 - 20.973: 99.6596% ( 2) 00:10:04.974 21.179 - 21.282: 99.6775% ( 1) 00:10:04.974 22.104 - 22.207: 99.6954% ( 1) 00:10:04.974 22.310 - 22.413: 99.7134% ( 1) 00:10:04.974 22.618 - 22.721: 99.7313% ( 1) 00:10:04.974 23.338 - 23.441: 99.7492% ( 1) 00:10:04.974 23.544 - 23.647: 99.7850% ( 2) 00:10:04.974 23.647 - 23.749: 99.8029% ( 1) 00:10:04.974 23.852 - 23.955: 99.8388% ( 2) 00:10:04.974 23.955 - 24.058: 99.8567% ( 1) 00:10:04.974 24.058 - 24.161: 99.8746% ( 1) 00:10:04.974 24.263 - 24.366: 99.8925% ( 1) 00:10:04.974 24.469 - 24.572: 99.9104% ( 1) 00:10:04.974 24.675 - 24.778: 99.9283% ( 1) 00:10:04.974 25.189 - 25.292: 99.9463% ( 1) 00:10:04.974 26.320 - 26.525: 99.9642% ( 1) 00:10:04.974 27.553 - 27.759: 99.9821% ( 1) 00:10:04.974 47.499 - 47.704: 100.0000% ( 1) 00:10:04.974 00:10:04.974 00:10:04.974 real 0m1.327s 00:10:04.974 user 0m1.103s 00:10:04.974 sys 0m0.161s 00:10:04.974 03:54:47 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.974 03:54:47 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:10:04.974 ************************************ 00:10:04.974 END TEST nvme_overhead 00:10:04.974 ************************************ 00:10:04.974 03:54:47 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:04.974 03:54:47 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:10:04.974 03:54:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.974 03:54:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:04.974 ************************************ 00:10:04.974 START TEST nvme_arbitration 00:10:04.974 ************************************ 00:10:04.974 03:54:47 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:10:08.260 Initializing NVMe Controllers 00:10:08.260 Attached to 0000:00:10.0 00:10:08.260 Attached to 0000:00:11.0 00:10:08.260 Attached to 0000:00:13.0 00:10:08.260 Attached to 0000:00:12.0 00:10:08.260 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:10:08.260 Associating QEMU NVMe Ctrl (12341 
) with lcore 1 00:10:08.260 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:10:08.260 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:10:08.260 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:10:08.260 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:10:08.260 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:10:08.260 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:10:08.260 Initialization complete. Launching workers. 00:10:08.260 Starting thread on core 1 with urgent priority queue 00:10:08.260 Starting thread on core 2 with urgent priority queue 00:10:08.260 Starting thread on core 3 with urgent priority queue 00:10:08.260 Starting thread on core 0 with urgent priority queue 00:10:08.260 QEMU NVMe Ctrl (12340 ) core 0: 576.00 IO/s 173.61 secs/100000 ios 00:10:08.260 QEMU NVMe Ctrl (12342 ) core 0: 576.00 IO/s 173.61 secs/100000 ios 00:10:08.260 QEMU NVMe Ctrl (12341 ) core 1: 597.33 IO/s 167.41 secs/100000 ios 00:10:08.260 QEMU NVMe Ctrl (12342 ) core 1: 597.33 IO/s 167.41 secs/100000 ios 00:10:08.260 QEMU NVMe Ctrl (12343 ) core 2: 490.67 IO/s 203.80 secs/100000 ios 00:10:08.260 QEMU NVMe Ctrl (12342 ) core 3: 533.33 IO/s 187.50 secs/100000 ios 00:10:08.260 ======================================================== 00:10:08.260 00:10:08.260 00:10:08.260 real 0m3.423s 00:10:08.260 user 0m9.348s 00:10:08.260 sys 0m0.190s 00:10:08.260 03:54:50 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.260 03:54:50 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:10:08.260 ************************************ 00:10:08.260 END TEST nvme_arbitration 00:10:08.260 ************************************ 00:10:08.260 03:54:50 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:08.260 03:54:50 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:08.260 03:54:50 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.260 03:54:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:08.260 ************************************ 00:10:08.260 START TEST nvme_single_aen 00:10:08.260 ************************************ 00:10:08.260 03:54:50 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:10:08.561 Asynchronous Event Request test 00:10:08.561 Attached to 0000:00:10.0 00:10:08.561 Attached to 0000:00:11.0 00:10:08.561 Attached to 0000:00:13.0 00:10:08.561 Attached to 0000:00:12.0 00:10:08.561 Reset controller to setup AER completions for this process 00:10:08.561 Registering asynchronous event callbacks... 
00:10:08.561 Getting orig temperature thresholds of all controllers 00:10:08.561 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:08.561 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:08.561 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:08.561 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:08.561 Setting all controllers temperature threshold low to trigger AER 00:10:08.561 Waiting for all controllers temperature threshold to be set lower 00:10:08.561 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:08.561 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:08.561 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:08.561 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:08.561 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:08.561 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:08.561 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:08.561 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:08.561 Waiting for all controllers to trigger AER and reset threshold 00:10:08.562 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:08.562 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:08.562 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:08.562 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:08.562 Cleaning up... 00:10:08.841 00:10:08.841 real 0m0.300s 00:10:08.841 user 0m0.089s 00:10:08.841 sys 0m0.167s 00:10:08.841 03:54:51 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.841 03:54:51 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:10:08.841 ************************************ 00:10:08.841 END TEST nvme_single_aen 00:10:08.841 ************************************ 00:10:08.841 03:54:51 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:10:08.841 03:54:51 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:08.841 03:54:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.841 03:54:51 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:08.841 ************************************ 00:10:08.841 START TEST nvme_doorbell_aers 00:10:08.841 ************************************ 00:10:08.841 03:54:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:10:08.841 03:54:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:10:08.841 03:54:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:10:08.841 03:54:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:10:08.841 03:54:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:10:08.841 03:54:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:08.841 03:54:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:10:08.841 03:54:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:08.841 03:54:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:08.841 03:54:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
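The xtrace above shows how the harness discovers controllers: the bdfs array is filled from gen_nvme.sh output piped through jq, with the element-count guard following just below in the trace. The same pattern as a standalone sketch (rootdir as in this log):

#!/usr/bin/env bash
# Enumerate NVMe PCI addresses the way the traced get_nvme_bdfs helper does:
# gen_nvme.sh emits a JSON config; jq pulls each controller traddr out of it.
rootdir=/home/vagrant/spdk_repo/spdk

get_nvme_bdfs() {
    local bdfs
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    ((${#bdfs[@]} == 0)) && return 1   # the (( 4 == 0 )) check in the trace
    printf '%s\n' "${bdfs[@]}"
}

# Usage mirroring the per-controller loop in nvme.sh:
for bdf in $(get_nvme_bdfs); do
    echo "would run doorbell_aers against trtype:PCIe traddr:$bdf"
done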
00:10:08.841 03:54:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:08.841 03:54:51 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:08.841 03:54:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:08.841 03:54:51 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:09.098 [2024-12-07 03:54:51.821476] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64443) is not found. Dropping the request. 00:10:19.077 Executing: test_write_invalid_db 00:10:19.077 Waiting for AER completion... 00:10:19.077 Failure: test_write_invalid_db 00:10:19.077 00:10:19.077 Executing: test_invalid_db_write_overflow_sq 00:10:19.077 Waiting for AER completion... 00:10:19.077 Failure: test_invalid_db_write_overflow_sq 00:10:19.077 00:10:19.077 Executing: test_invalid_db_write_overflow_cq 00:10:19.077 Waiting for AER completion... 00:10:19.077 Failure: test_invalid_db_write_overflow_cq 00:10:19.077 00:10:19.077 03:55:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:19.077 03:55:01 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:19.335 [2024-12-07 03:55:01.839476] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64443) is not found. Dropping the request. 00:10:29.310 Executing: test_write_invalid_db 00:10:29.310 Waiting for AER completion... 00:10:29.310 Failure: test_write_invalid_db 00:10:29.310 00:10:29.310 Executing: test_invalid_db_write_overflow_sq 00:10:29.310 Waiting for AER completion... 00:10:29.310 Failure: test_invalid_db_write_overflow_sq 00:10:29.310 00:10:29.310 Executing: test_invalid_db_write_overflow_cq 00:10:29.310 Waiting for AER completion... 00:10:29.310 Failure: test_invalid_db_write_overflow_cq 00:10:29.310 00:10:29.310 03:55:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:29.310 03:55:11 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:29.310 [2024-12-07 03:55:11.918318] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64443) is not found. Dropping the request. 00:10:39.292 Executing: test_write_invalid_db 00:10:39.292 Waiting for AER completion... 00:10:39.292 Failure: test_write_invalid_db 00:10:39.292 00:10:39.292 Executing: test_invalid_db_write_overflow_sq 00:10:39.292 Waiting for AER completion... 00:10:39.292 Failure: test_invalid_db_write_overflow_sq 00:10:39.292 00:10:39.292 Executing: test_invalid_db_write_overflow_cq 00:10:39.292 Waiting for AER completion... 
00:10:39.292 Failure: test_invalid_db_write_overflow_cq 00:10:39.292 00:10:39.292 03:55:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:39.292 03:55:21 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:39.292 [2024-12-07 03:55:21.955027] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64443) is not found. Dropping the request. 00:10:49.270 Executing: test_write_invalid_db 00:10:49.270 Waiting for AER completion... 00:10:49.270 Failure: test_write_invalid_db 00:10:49.270 00:10:49.270 Executing: test_invalid_db_write_overflow_sq 00:10:49.270 Waiting for AER completion... 00:10:49.270 Failure: test_invalid_db_write_overflow_sq 00:10:49.270 00:10:49.270 Executing: test_invalid_db_write_overflow_cq 00:10:49.270 Waiting for AER completion... 00:10:49.270 Failure: test_invalid_db_write_overflow_cq 00:10:49.270 00:10:49.270 00:10:49.270 real 0m40.345s 00:10:49.270 user 0m28.308s 00:10:49.270 sys 0m11.628s 00:10:49.270 03:55:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.270 03:55:31 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:10:49.270 ************************************ 00:10:49.270 END TEST nvme_doorbell_aers 00:10:49.270 ************************************ 00:10:49.270 03:55:31 nvme -- nvme/nvme.sh@97 -- # uname 00:10:49.270 03:55:31 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:49.270 03:55:31 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:49.270 03:55:31 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:10:49.270 03:55:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.270 03:55:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:49.270 ************************************ 00:10:49.270 START TEST nvme_multi_aen 00:10:49.270 ************************************ 00:10:49.270 03:55:31 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:49.528 [2024-12-07 03:55:32.042286] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64443) is not found. Dropping the request. 00:10:49.528 [2024-12-07 03:55:32.042385] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64443) is not found. Dropping the request. 00:10:49.528 [2024-12-07 03:55:32.042418] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64443) is not found. Dropping the request. 00:10:49.528 [2024-12-07 03:55:32.044139] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64443) is not found. Dropping the request. 00:10:49.528 [2024-12-07 03:55:32.044333] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64443) is not found. Dropping the request. 00:10:49.528 [2024-12-07 03:55:32.044355] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64443) is not found. Dropping the request. 00:10:49.528 [2024-12-07 03:55:32.046037] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64443) is not found. 
Dropping the request. 00:10:49.528 [2024-12-07 03:55:32.046081] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64443) is not found. Dropping the request. 00:10:49.528 [2024-12-07 03:55:32.046096] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64443) is not found. Dropping the request. 00:10:49.528 [2024-12-07 03:55:32.047573] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64443) is not found. Dropping the request. 00:10:49.528 [2024-12-07 03:55:32.047758] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64443) is not found. Dropping the request. 00:10:49.528 [2024-12-07 03:55:32.047778] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64443) is not found. Dropping the request. 00:10:49.528 Child process pid: 64964 00:10:49.786 [Child] Asynchronous Event Request test 00:10:49.786 [Child] Attached to 0000:00:10.0 00:10:49.786 [Child] Attached to 0000:00:11.0 00:10:49.786 [Child] Attached to 0000:00:13.0 00:10:49.786 [Child] Attached to 0000:00:12.0 00:10:49.786 [Child] Registering asynchronous event callbacks... 00:10:49.786 [Child] Getting orig temperature thresholds of all controllers 00:10:49.786 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:49.786 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:49.786 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:49.786 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:49.786 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:49.786 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:49.786 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:49.786 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:49.786 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:49.786 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:49.786 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:49.786 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:49.786 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:49.786 [Child] Cleaning up... 00:10:49.786 Asynchronous Event Request test 00:10:49.786 Attached to 0000:00:10.0 00:10:49.786 Attached to 0000:00:11.0 00:10:49.787 Attached to 0000:00:13.0 00:10:49.787 Attached to 0000:00:12.0 00:10:49.787 Reset controller to setup AER completions for this process 00:10:49.787 Registering asynchronous event callbacks... 
00:10:49.787 Getting orig temperature thresholds of all controllers 00:10:49.787 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:49.787 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:49.787 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:49.787 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:49.787 Setting all controllers temperature threshold low to trigger AER 00:10:49.787 Waiting for all controllers temperature threshold to be set lower 00:10:49.787 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:49.787 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:49.787 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:49.787 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:49.787 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:49.787 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:49.787 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:49.787 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:49.787 Waiting for all controllers to trigger AER and reset threshold 00:10:49.787 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:49.787 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:49.787 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:49.787 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:49.787 Cleaning up... 00:10:49.787 00:10:49.787 real 0m0.610s 00:10:49.787 user 0m0.207s 00:10:49.787 sys 0m0.294s 00:10:49.787 03:55:32 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:49.787 ************************************ 00:10:49.787 END TEST nvme_multi_aen 00:10:49.787 ************************************ 00:10:49.787 03:55:32 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:10:49.787 03:55:32 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:49.787 03:55:32 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:49.787 03:55:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:49.787 03:55:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:49.787 ************************************ 00:10:49.787 START TEST nvme_startup 00:10:49.787 ************************************ 00:10:49.787 03:55:32 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:50.046 Initializing NVMe Controllers 00:10:50.046 Attached to 0000:00:10.0 00:10:50.046 Attached to 0000:00:11.0 00:10:50.046 Attached to 0000:00:13.0 00:10:50.046 Attached to 0000:00:12.0 00:10:50.046 Initialization complete. 00:10:50.046 Time used:219298.625 (us). 
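Every section in this log is driven by the same run_test wrapper: it prints the starred START TEST/END TEST banners, times the payload (the real/user/sys triples after each test), and propagates the exit code. A minimal reconstruction -- only the banners and timing output are visible here, so the argument checks and xtrace toggling traced at autotest_common.sh are elided:

#!/usr/bin/env bash
# run_test NAME CMD [ARGS...] -- sketch of the wrapper behind the banners above.
run_test() {
    local name=$1
    shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                    # emits the real/user/sys triple seen above
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}

# e.g. run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000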
00:10:50.306 00:10:50.306 real 0m0.319s 00:10:50.306 user 0m0.115s 00:10:50.306 sys 0m0.153s 00:10:50.306 ************************************ 00:10:50.306 END TEST nvme_startup 00:10:50.306 ************************************ 00:10:50.306 03:55:32 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.306 03:55:32 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:10:50.306 03:55:32 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:50.306 03:55:32 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:50.306 03:55:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.306 03:55:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:50.306 ************************************ 00:10:50.306 START TEST nvme_multi_secondary 00:10:50.306 ************************************ 00:10:50.306 03:55:32 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:10:50.306 03:55:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65020 00:10:50.306 03:55:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:50.306 03:55:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65021 00:10:50.306 03:55:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:50.306 03:55:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:53.594 Initializing NVMe Controllers 00:10:53.594 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:53.594 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:53.594 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:53.594 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:53.594 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:53.594 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:53.594 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:53.594 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:53.594 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:53.594 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:53.594 Initialization complete. Launching workers. 
00:10:53.594 ======================================================== 00:10:53.594 Latency(us) 00:10:53.594 Device Information : IOPS MiB/s Average min max 00:10:53.594 PCIE (0000:00:10.0) NSID 1 from core 1: 5046.55 19.71 3167.97 1619.22 9125.69 00:10:53.594 PCIE (0000:00:11.0) NSID 1 from core 1: 5046.55 19.71 3169.91 1704.96 9590.78 00:10:53.594 PCIE (0000:00:13.0) NSID 1 from core 1: 5046.55 19.71 3169.89 1775.03 9069.02 00:10:53.594 PCIE (0000:00:12.0) NSID 1 from core 1: 5046.55 19.71 3169.98 1675.94 8540.34 00:10:53.594 PCIE (0000:00:12.0) NSID 2 from core 1: 5046.55 19.71 3170.24 1730.80 8718.17 00:10:53.594 PCIE (0000:00:12.0) NSID 3 from core 1: 5046.55 19.71 3170.26 1809.02 8963.13 00:10:53.594 ======================================================== 00:10:53.594 Total : 30279.30 118.28 3169.71 1619.22 9590.78 00:10:53.594 00:10:53.853 Initializing NVMe Controllers 00:10:53.853 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:53.853 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:53.853 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:53.853 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:53.853 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:53.853 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:53.853 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:53.853 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:53.853 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:53.853 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:53.853 Initialization complete. Launching workers. 00:10:53.853 ======================================================== 00:10:53.853 Latency(us) 00:10:53.853 Device Information : IOPS MiB/s Average min max 00:10:53.853 PCIE (0000:00:10.0) NSID 1 from core 2: 3236.84 12.64 4941.78 1244.52 12245.93 00:10:53.853 PCIE (0000:00:11.0) NSID 1 from core 2: 3236.84 12.64 4942.16 1369.17 12685.91 00:10:53.853 PCIE (0000:00:13.0) NSID 1 from core 2: 3236.84 12.64 4942.12 1233.16 13957.07 00:10:53.853 PCIE (0000:00:12.0) NSID 1 from core 2: 3236.84 12.64 4942.42 1228.18 14218.50 00:10:53.853 PCIE (0000:00:12.0) NSID 2 from core 2: 3236.84 12.64 4941.94 1160.57 14099.97 00:10:53.853 PCIE (0000:00:12.0) NSID 3 from core 2: 3236.84 12.64 4942.32 1231.94 13139.89 00:10:53.853 ======================================================== 00:10:53.853 Total : 19421.05 75.86 4942.12 1160.57 14218.50 00:10:53.853 00:10:53.853 03:55:36 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65020 00:10:55.759 Initializing NVMe Controllers 00:10:55.759 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:55.759 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:55.759 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:55.759 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:55.759 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:55.759 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:55.759 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:55.759 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:55.759 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:55.759 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:55.759 Initialization complete. Launching workers. 
00:10:55.759 ======================================================== 00:10:55.759 Latency(us) 00:10:55.759 Device Information : IOPS MiB/s Average min max 00:10:55.759 PCIE (0000:00:10.0) NSID 1 from core 0: 8392.21 32.78 1905.05 913.48 8949.90 00:10:55.759 PCIE (0000:00:11.0) NSID 1 from core 0: 8392.21 32.78 1906.06 929.82 8944.48 00:10:55.759 PCIE (0000:00:13.0) NSID 1 from core 0: 8392.21 32.78 1906.02 870.50 8954.70 00:10:55.759 PCIE (0000:00:12.0) NSID 1 from core 0: 8392.21 32.78 1905.99 815.57 8951.52 00:10:55.759 PCIE (0000:00:12.0) NSID 2 from core 0: 8392.21 32.78 1905.97 769.46 9605.09 00:10:55.759 PCIE (0000:00:12.0) NSID 3 from core 0: 8395.40 32.79 1905.22 701.97 8840.00 00:10:55.759 ======================================================== 00:10:55.759 Total : 50356.43 196.70 1905.72 701.97 9605.09 00:10:55.759 00:10:55.759 03:55:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65021 00:10:55.759 03:55:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65090 00:10:55.759 03:55:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:55.759 03:55:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65091 00:10:55.759 03:55:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:55.759 03:55:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:59.050 Initializing NVMe Controllers 00:10:59.050 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:59.050 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:59.050 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:59.050 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:59.050 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:59.050 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:59.050 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:59.050 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:59.050 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:59.050 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:59.050 Initialization complete. Launching workers. 
00:10:59.051 ======================================================== 00:10:59.051 Latency(us) 00:10:59.051 Device Information : IOPS MiB/s Average min max 00:10:59.051 PCIE (0000:00:10.0) NSID 1 from core 0: 5263.79 20.56 3037.46 945.76 11327.75 00:10:59.051 PCIE (0000:00:11.0) NSID 1 from core 0: 5263.79 20.56 3039.20 1015.47 11415.79 00:10:59.051 PCIE (0000:00:13.0) NSID 1 from core 0: 5263.79 20.56 3039.30 1008.01 11736.41 00:10:59.051 PCIE (0000:00:12.0) NSID 1 from core 0: 5263.79 20.56 3039.49 972.09 10994.72 00:10:59.051 PCIE (0000:00:12.0) NSID 2 from core 0: 5263.79 20.56 3039.62 956.23 11708.85 00:10:59.051 PCIE (0000:00:12.0) NSID 3 from core 0: 5269.12 20.58 3036.81 941.54 11757.90 00:10:59.051 ======================================================== 00:10:59.051 Total : 31588.05 123.39 3038.65 941.54 11757.90 00:10:59.051 00:10:59.051 Initializing NVMe Controllers 00:10:59.051 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:59.051 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:59.051 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:59.051 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:59.051 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:59.051 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:59.051 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:59.051 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:59.051 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:59.051 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:59.051 Initialization complete. Launching workers. 00:10:59.051 ======================================================== 00:10:59.051 Latency(us) 00:10:59.051 Device Information : IOPS MiB/s Average min max 00:10:59.051 PCIE (0000:00:10.0) NSID 1 from core 1: 5102.85 19.93 3133.04 1009.18 7685.66 00:10:59.051 PCIE (0000:00:11.0) NSID 1 from core 1: 5102.85 19.93 3134.70 1041.81 8308.67 00:10:59.051 PCIE (0000:00:13.0) NSID 1 from core 1: 5102.85 19.93 3134.64 958.92 8186.46 00:10:59.051 PCIE (0000:00:12.0) NSID 1 from core 1: 5102.85 19.93 3134.59 893.98 8264.04 00:10:59.051 PCIE (0000:00:12.0) NSID 2 from core 1: 5102.85 19.93 3134.71 1007.36 8666.20 00:10:59.051 PCIE (0000:00:12.0) NSID 3 from core 1: 5102.85 19.93 3134.68 1007.68 8525.89 00:10:59.051 ======================================================== 00:10:59.051 Total : 30617.09 119.60 3134.39 893.98 8666.20 00:10:59.051 00:11:01.587 Initializing NVMe Controllers 00:11:01.587 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:11:01.587 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:11:01.587 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:11:01.587 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:11:01.587 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:11:01.587 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:11:01.587 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:11:01.587 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:11:01.587 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:11:01.587 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:11:01.587 Initialization complete. Launching workers. 
00:11:01.587 ======================================================== 00:11:01.587 Latency(us) 00:11:01.587 Device Information : IOPS MiB/s Average min max 00:11:01.587 PCIE (0000:00:10.0) NSID 1 from core 2: 3244.19 12.67 4929.98 1037.01 17083.83 00:11:01.587 PCIE (0000:00:11.0) NSID 1 from core 2: 3244.19 12.67 4931.16 1022.21 18621.20 00:11:01.587 PCIE (0000:00:13.0) NSID 1 from core 2: 3244.19 12.67 4927.58 1008.24 14875.25 00:11:01.587 PCIE (0000:00:12.0) NSID 1 from core 2: 3244.19 12.67 4927.23 969.01 13976.07 00:11:01.587 PCIE (0000:00:12.0) NSID 2 from core 2: 3244.19 12.67 4927.39 1024.77 12519.83 00:11:01.587 PCIE (0000:00:12.0) NSID 3 from core 2: 3244.19 12.67 4927.07 1047.57 13312.85 00:11:01.587 ======================================================== 00:11:01.587 Total : 19465.12 76.04 4928.40 969.01 18621.20 00:11:01.587 00:11:01.587 ************************************ 00:11:01.587 END TEST nvme_multi_secondary 00:11:01.587 ************************************ 00:11:01.587 03:55:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65090 00:11:01.587 03:55:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65091 00:11:01.587 00:11:01.587 real 0m10.955s 00:11:01.587 user 0m18.550s 00:11:01.587 sys 0m1.028s 00:11:01.587 03:55:43 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:01.587 03:55:43 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:11:01.587 03:55:43 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:11:01.587 03:55:43 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:11:01.587 03:55:43 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64028 ]] 00:11:01.587 03:55:43 nvme -- common/autotest_common.sh@1094 -- # kill 64028 00:11:01.587 03:55:43 nvme -- common/autotest_common.sh@1095 -- # wait 64028 00:11:01.587 [2024-12-07 03:55:43.891661] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64963) is not found. Dropping the request. 00:11:01.587 [2024-12-07 03:55:43.892115] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64963) is not found. Dropping the request. 00:11:01.587 [2024-12-07 03:55:43.892207] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64963) is not found. Dropping the request. 00:11:01.587 [2024-12-07 03:55:43.892262] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64963) is not found. Dropping the request. 00:11:01.587 [2024-12-07 03:55:43.898924] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64963) is not found. Dropping the request. 00:11:01.587 [2024-12-07 03:55:43.899354] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64963) is not found. Dropping the request. 00:11:01.587 [2024-12-07 03:55:43.899848] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64963) is not found. Dropping the request. 00:11:01.587 [2024-12-07 03:55:43.900267] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64963) is not found. Dropping the request. 00:11:01.587 [2024-12-07 03:55:43.905056] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64963) is not found. Dropping the request. 
00:11:01.587 [2024-12-07 03:55:43.905469] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64963) is not found. Dropping the request. 00:11:01.587 [2024-12-07 03:55:43.905752] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64963) is not found. Dropping the request. 00:11:01.587 [2024-12-07 03:55:43.906032] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64963) is not found. Dropping the request. 00:11:01.587 [2024-12-07 03:55:43.909561] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64963) is not found. Dropping the request. 00:11:01.587 [2024-12-07 03:55:43.909846] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64963) is not found. Dropping the request. 00:11:01.587 [2024-12-07 03:55:43.909969] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64963) is not found. Dropping the request. 00:11:01.587 [2024-12-07 03:55:43.910052] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64963) is not found. Dropping the request. 00:11:01.587 03:55:44 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:11:01.587 03:55:44 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:11:01.587 03:55:44 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:01.587 03:55:44 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:01.587 03:55:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:01.587 03:55:44 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:01.587 ************************************ 00:11:01.587 START TEST bdev_nvme_reset_stuck_adm_cmd 00:11:01.587 ************************************ 00:11:01.587 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:11:01.587 * Looking for test storage... 
00:11:01.587 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:01.587 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:01.587 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:11:01.587 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:01.587 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:01.587 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:01.587 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:01.587 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:01.587 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:11:01.587 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:11:01.587 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:11:01.587 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:11:01.587 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:11:01.587 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:11:01.587 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:11:01.587 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:01.587 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:11:01.587 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:11:01.587 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:01.587 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:01.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.848 --rc genhtml_branch_coverage=1 00:11:01.848 --rc genhtml_function_coverage=1 00:11:01.848 --rc genhtml_legend=1 00:11:01.848 --rc geninfo_all_blocks=1 00:11:01.848 --rc geninfo_unexecuted_blocks=1 00:11:01.848 00:11:01.848 ' 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:01.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.848 --rc genhtml_branch_coverage=1 00:11:01.848 --rc genhtml_function_coverage=1 00:11:01.848 --rc genhtml_legend=1 00:11:01.848 --rc geninfo_all_blocks=1 00:11:01.848 --rc geninfo_unexecuted_blocks=1 00:11:01.848 00:11:01.848 ' 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:01.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.848 --rc genhtml_branch_coverage=1 00:11:01.848 --rc genhtml_function_coverage=1 00:11:01.848 --rc genhtml_legend=1 00:11:01.848 --rc geninfo_all_blocks=1 00:11:01.848 --rc geninfo_unexecuted_blocks=1 00:11:01.848 00:11:01.848 ' 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:01.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:01.848 --rc genhtml_branch_coverage=1 00:11:01.848 --rc genhtml_function_coverage=1 00:11:01.848 --rc genhtml_legend=1 00:11:01.848 --rc geninfo_all_blocks=1 00:11:01.848 --rc geninfo_unexecuted_blocks=1 00:11:01.848 00:11:01.848 ' 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:11:01.848 
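The lt/cmp_versions trace above implements a plain component-wise version compare: both strings are split on ".-:" and walked field by field, so "1.15" sorts before "2". A condensed, runnable sketch of the same logic (the real scripts/common.sh also validates each field via decimal()):

  # Return 0 (true) when $1 is an older version than $2, as in `lt 1.15 2`.
  lt() {
      local IFS='.-:'
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first differing field decides
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # equal versions are not "less than"
  }
  lt 1.15 2 && echo "lcov is older than 2"   # prints, matching the trace above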
03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65257 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65257 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65257 ']' 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:01.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
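The get_first_nvme_bdf trace above is just JSON plumbing: gen_nvme.sh emits an SPDK bdev config for every local controller, jq pulls each PCI address, and the first entry becomes the test target. A minimal standalone sketch (rootdir as in this workspace):

  # List all NVMe PCI addresses known to SPDK and pick the first one.
  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
  echo "${bdfs[0]}"   # here: 0000:00:10.0, first of the four devices listed above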
00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.848 03:55:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:01.848 [2024-12-07 03:55:44.573361] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:11:01.848 [2024-12-07 03:55:44.573632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65257 ] 00:11:02.109 [2024-12-07 03:55:44.773605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:02.369 [2024-12-07 03:55:44.888242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.369 [2024-12-07 03:55:44.888339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:02.369 [2024-12-07 03:55:44.888466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.369 [2024-12-07 03:55:44.888502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:03.305 03:55:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:03.305 03:55:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:11:03.305 03:55:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:11:03.305 03:55:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.305 03:55:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:03.305 nvme0n1 00:11:03.305 03:55:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.305 03:55:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:11:03.305 03:55:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_ze5n8.txt 00:11:03.305 03:55:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:11:03.305 03:55:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.305 03:55:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:03.305 true 00:11:03.305 03:55:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.305 03:55:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:11:03.305 03:55:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733543745 00:11:03.305 03:55:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65280 00:11:03.305 03:55:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:11:03.305 03:55:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:11:03.305 
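Everything from here to the detach below follows one script, nvme_reset_stuck_adm_cmd.sh. Condensed into a sketch, it arms a one-shot error injection that holds the next Get Features admin command instead of submitting it, then resets the controller while that command is stuck ($get_features_cmd stands in for the base64 blob in the trace above, $tmp_file for the mktemp result):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
  "$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  start_time=$(date +%s)
  "$rpc" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
      -c "$get_features_cmd" > "$tmp_file" &     # blocks on the held command
  get_feat_pid=$!
  sleep 2
  "$rpc" bdev_nvme_reset_controller nvme0        # must complete the stuck command
  wait "$get_feat_pid"
  diff_time=$(( $(date +%s) - start_time ))      # 2 s below, well under test_timeout=5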
03:55:45 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:11:05.281 03:55:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:11:05.281 03:55:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.281 03:55:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:05.281 [2024-12-07 03:55:47.886963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:11:05.281 [2024-12-07 03:55:47.887503] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:11:05.281 [2024-12-07 03:55:47.887645] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:11:05.281 [2024-12-07 03:55:47.887763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.281 [2024-12-07 03:55:47.889850] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:11:05.281 03:55:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.281 03:55:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65280 00:11:05.281 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65280 00:11:05.281 03:55:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65280 00:11:05.281 03:55:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:11:05.281 03:55:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:11:05.281 03:55:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:11:05.281 03:55:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.281 03:55:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:05.281 03:55:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.281 03:55:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:11:05.281 03:55:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_ze5n8.txt 00:11:05.281 03:55:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:11:05.281 03:55:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:11:05.281 03:55:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:05.281 03:55:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:05.281 03:55:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:05.281 03:55:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:05.281 03:55:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:05.281 03:55:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:05.281 03:55:47 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:11:05.281 03:55:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:11:05.281 03:55:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:11:05.281 03:55:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:11:05.281 03:55:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:11:05.281 03:55:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:11:05.281 03:55:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:11:05.281 03:55:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:11:05.540 03:55:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:11:05.540 03:55:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:11:05.540 03:55:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:11:05.540 03:55:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_ze5n8.txt 00:11:05.540 03:55:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65257 00:11:05.540 03:55:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65257 ']' 00:11:05.540 03:55:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65257 00:11:05.540 03:55:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:11:05.540 03:55:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:05.540 03:55:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65257 00:11:05.540 killing process with pid 65257 00:11:05.540 03:55:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:05.540 03:55:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:05.540 03:55:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65257' 00:11:05.540 03:55:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65257 00:11:05.540 03:55:48 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65257 00:11:08.079 03:55:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:11:08.079 03:55:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:11:08.079 00:11:08.079 real 0m6.344s 00:11:08.079 user 0m21.976s 00:11:08.079 sys 0m0.853s 00:11:08.079 ************************************ 00:11:08.079 END TEST bdev_nvme_reset_stuck_adm_cmd 00:11:08.079 
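The jq/base64/hexdump dance above reduces to shift-and-mask arithmetic on the completion's status word, and it explains the "INVALID OPCODE (00/01)" completion printed during the reset: the injected status really did come back. Worked through for this run's blob:

  # AAAAAAAAAAAAAAAAAAACAA== decodes to 16 bytes, all 0x00 except byte 14 = 0x02,
  # so the 16-bit status word (bytes 14-15, little-endian) is 0x0002 -- the
  # "status=2" in the trace. The two base64_decode_bits calls then do:
  #   SC  = (0x0002 >> 1) & 0xff = 0x1   -> matches the injected --sc 1
  #   SCT = (0x0002 >> 9) & 0x3  = 0x0   -> matches the injected --sct 0
  base64 -d <(printf %s 'AAAAAAAAAAAAAAAAAAACAA==') | hexdump -ve '/1 "0x%02x\n"'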
************************************ 00:11:08.079 03:55:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.079 03:55:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:11:08.079 03:55:50 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:11:08.079 03:55:50 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:11:08.079 03:55:50 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:08.079 03:55:50 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:08.079 03:55:50 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:08.079 ************************************ 00:11:08.079 START TEST nvme_fio 00:11:08.079 ************************************ 00:11:08.079 03:55:50 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:11:08.079 03:55:50 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:11:08.079 03:55:50 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:11:08.079 03:55:50 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:11:08.079 03:55:50 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:08.079 03:55:50 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:11:08.079 03:55:50 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:08.079 03:55:50 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:08.079 03:55:50 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:08.079 03:55:50 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:08.079 03:55:50 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:08.079 03:55:50 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:11:08.079 03:55:50 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:11:08.079 03:55:50 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:08.079 03:55:50 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:08.079 03:55:50 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:08.338 03:55:50 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:08.338 03:55:50 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:11:08.597 03:55:51 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:08.597 03:55:51 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:08.597 03:55:51 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:08.597 03:55:51 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:11:08.597 03:55:51 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:08.597 03:55:51 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:11:08.597 03:55:51 nvme.nvme_fio -- common/autotest_common.sh@1344 
-- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:08.597 03:55:51 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:11:08.597 03:55:51 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:11:08.597 03:55:51 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:11:08.597 03:55:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:08.597 03:55:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:11:08.597 03:55:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:11:08.597 03:55:51 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:08.597 03:55:51 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:08.597 03:55:51 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:11:08.597 03:55:51 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:08.597 03:55:51 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:11:08.855 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:08.855 fio-3.35 00:11:08.855 Starting 1 thread 00:11:12.136 00:11:12.136 test: (groupid=0, jobs=1): err= 0: pid=65435: Sat Dec 7 03:55:54 2024 00:11:12.136 read: IOPS=22.1k, BW=86.5MiB/s (90.7MB/s)(173MiB/2001msec) 00:11:12.136 slat (usec): min=3, max=110, avg= 4.59, stdev= 1.31 00:11:12.136 clat (usec): min=187, max=10623, avg=2884.61, stdev=485.42 00:11:12.136 lat (usec): min=191, max=10733, avg=2889.20, stdev=486.15 00:11:12.136 clat percentiles (usec): 00:11:12.136 | 1.00th=[ 2606], 5.00th=[ 2671], 10.00th=[ 2704], 20.00th=[ 2737], 00:11:12.136 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2802], 60.00th=[ 2835], 00:11:12.136 | 70.00th=[ 2868], 80.00th=[ 2900], 90.00th=[ 2933], 95.00th=[ 3032], 00:11:12.136 | 99.00th=[ 5211], 99.50th=[ 6980], 99.90th=[ 8455], 99.95th=[ 8586], 00:11:12.136 | 99.99th=[10290] 00:11:12.136 bw ( KiB/s): min=83201, max=90976, per=99.58%, avg=88208.33, stdev=4344.51, samples=3 00:11:12.136 iops : min=20800, max=22744, avg=22052.00, stdev=1086.27, samples=3 00:11:12.136 write: IOPS=22.0k, BW=85.9MiB/s (90.1MB/s)(172MiB/2001msec); 0 zone resets 00:11:12.136 slat (nsec): min=4071, max=63244, avg=4740.79, stdev=1207.40 00:11:12.136 clat (usec): min=233, max=10405, avg=2888.85, stdev=480.21 00:11:12.136 lat (usec): min=237, max=10426, avg=2893.59, stdev=480.85 00:11:12.136 clat percentiles (usec): 00:11:12.136 | 1.00th=[ 2606], 5.00th=[ 2671], 10.00th=[ 2704], 20.00th=[ 2737], 00:11:12.136 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2835], 00:11:12.136 | 70.00th=[ 2868], 80.00th=[ 2900], 90.00th=[ 2966], 95.00th=[ 3064], 00:11:12.136 | 99.00th=[ 5080], 99.50th=[ 6915], 99.90th=[ 8455], 99.95th=[ 8717], 00:11:12.136 | 99.99th=[10159] 00:11:12.136 bw ( KiB/s): min=83209, max=91208, per=100.00%, avg=88352.33, stdev=4463.30, samples=3 00:11:12.136 iops : min=20802, max=22802, avg=22088.00, stdev=1115.97, samples=3 00:11:12.136 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:12.136 lat (msec) : 2=0.05%, 4=97.89%, 10=2.00%, 20=0.01% 00:11:12.136 cpu : usr=99.25%, sys=0.05%, ctx=24, majf=0, minf=609 00:11:12.136 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:12.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:12.136 issued rwts: total=44310,44015,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.136 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:12.136 00:11:12.136 Run status group 0 (all jobs): 00:11:12.136 READ: bw=86.5MiB/s (90.7MB/s), 86.5MiB/s-86.5MiB/s (90.7MB/s-90.7MB/s), io=173MiB (181MB), run=2001-2001msec 00:11:12.136 WRITE: bw=85.9MiB/s (90.1MB/s), 85.9MiB/s-85.9MiB/s (90.1MB/s-90.1MB/s), io=172MiB (180MB), run=2001-2001msec 00:11:12.395 ----------------------------------------------------- 00:11:12.395 Suppressions used: 00:11:12.395 count bytes template 00:11:12.395 1 32 /usr/src/fio/parse.c 00:11:12.395 1 8 libtcmalloc_minimal.so 00:11:12.395 ----------------------------------------------------- 00:11:12.395 00:11:12.395 03:55:54 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:12.395 03:55:54 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:12.395 03:55:54 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:12.395 03:55:54 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:12.654 03:55:55 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:11:12.654 03:55:55 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:12.913 03:55:55 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:12.913 03:55:55 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:12.913 03:55:55 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:12.913 03:55:55 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:11:12.913 03:55:55 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:12.913 03:55:55 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:11:12.913 03:55:55 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:12.913 03:55:55 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:11:12.913 03:55:55 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:11:12.913 03:55:55 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:11:12.913 03:55:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:12.913 03:55:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:11:12.913 03:55:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:11:12.913 03:55:55 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:12.913 03:55:55 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:12.913 03:55:55 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:11:12.913 03:55:55 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:12.913 03:55:55 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:11:13.172 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:13.172 fio-3.35 00:11:13.172 Starting 1 thread 00:11:17.361 00:11:17.361 test: (groupid=0, jobs=1): err= 0: pid=65501: Sat Dec 7 03:55:59 2024 00:11:17.361 read: IOPS=21.7k, BW=84.8MiB/s (88.9MB/s)(170MiB/2001msec) 00:11:17.361 slat (nsec): min=3769, max=58354, avg=4578.20, stdev=1107.42 00:11:17.361 clat (usec): min=223, max=10747, avg=2943.97, stdev=426.97 00:11:17.361 lat (usec): min=228, max=10805, avg=2948.55, stdev=427.46 00:11:17.361 clat percentiles (usec): 00:11:17.361 | 1.00th=[ 2212], 5.00th=[ 2704], 10.00th=[ 2737], 20.00th=[ 2802], 00:11:17.361 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2933], 00:11:17.361 | 70.00th=[ 2966], 80.00th=[ 2999], 90.00th=[ 3064], 95.00th=[ 3294], 00:11:17.361 | 99.00th=[ 4686], 99.50th=[ 5407], 99.90th=[ 8291], 99.95th=[ 8586], 00:11:17.361 | 99.99th=[10552] 00:11:17.361 bw ( KiB/s): min=85824, max=86440, per=99.23%, avg=86142.33, stdev=308.52, samples=3 00:11:17.361 iops : min=21456, max=21610, avg=21535.33, stdev=77.11, samples=3 00:11:17.361 write: IOPS=21.5k, BW=84.2MiB/s (88.2MB/s)(168MiB/2001msec); 0 zone resets 00:11:17.361 slat (usec): min=3, max=242, avg= 4.73, stdev= 1.56 00:11:17.361 clat (usec): min=185, max=10650, avg=2950.10, stdev=441.90 00:11:17.361 lat (usec): min=190, max=10684, avg=2954.83, stdev=442.36 00:11:17.361 clat percentiles (usec): 00:11:17.361 | 1.00th=[ 2180], 5.00th=[ 2704], 10.00th=[ 2737], 20.00th=[ 2802], 00:11:17.361 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2933], 00:11:17.361 | 70.00th=[ 2966], 80.00th=[ 2999], 90.00th=[ 3064], 95.00th=[ 3294], 00:11:17.361 | 99.00th=[ 4752], 99.50th=[ 5473], 99.90th=[ 8356], 99.95th=[ 8848], 00:11:17.361 | 99.99th=[10159] 00:11:17.361 bw ( KiB/s): min=85748, max=87072, per=100.00%, avg=86278.67, stdev=699.99, samples=3 00:11:17.361 iops : min=21437, max=21768, avg=21569.67, stdev=175.00, samples=3 00:11:17.361 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:11:17.361 lat (msec) : 2=0.62%, 4=97.01%, 10=2.30%, 20=0.02% 00:11:17.361 cpu : usr=99.30%, sys=0.10%, ctx=4, majf=0, minf=608 00:11:17.361 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:17.361 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.361 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:17.361 issued rwts: total=43425,43108,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.361 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:17.361 00:11:17.361 Run status group 0 (all jobs): 00:11:17.361 READ: bw=84.8MiB/s (88.9MB/s), 84.8MiB/s-84.8MiB/s (88.9MB/s-88.9MB/s), io=170MiB (178MB), run=2001-2001msec 00:11:17.361 WRITE: bw=84.2MiB/s (88.2MB/s), 84.2MiB/s-84.2MiB/s (88.2MB/s-88.2MB/s), io=168MiB (177MB), run=2001-2001msec 00:11:17.361 ----------------------------------------------------- 00:11:17.361 Suppressions used: 00:11:17.361 count bytes template 00:11:17.361 1 32 /usr/src/fio/parse.c 00:11:17.361 1 8 libtcmalloc_minimal.so 00:11:17.361 ----------------------------------------------------- 00:11:17.361 00:11:17.361 03:55:59 nvme.nvme_fio -- 
nvme/nvme.sh@44 -- # ran_fio=true 00:11:17.361 03:55:59 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:17.361 03:55:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:17.361 03:55:59 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:17.621 03:56:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:11:17.621 03:56:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:17.881 03:56:00 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:17.881 03:56:00 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:17.881 03:56:00 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:17.881 03:56:00 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:11:17.881 03:56:00 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:17.881 03:56:00 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:11:17.881 03:56:00 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:17.881 03:56:00 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:11:17.881 03:56:00 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:11:17.881 03:56:00 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:11:17.881 03:56:00 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:17.881 03:56:00 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:11:17.881 03:56:00 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:11:17.881 03:56:00 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:17.881 03:56:00 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:17.881 03:56:00 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:11:17.881 03:56:00 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:17.881 03:56:00 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:11:18.140 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:18.140 fio-3.35 00:11:18.140 Starting 1 thread 00:11:22.334 00:11:22.334 test: (groupid=0, jobs=1): err= 0: pid=65562: Sat Dec 7 03:56:04 2024 00:11:22.334 read: IOPS=22.0k, BW=85.8MiB/s (90.0MB/s)(172MiB/2001msec) 00:11:22.334 slat (usec): min=3, max=148, avg= 4.56, stdev= 1.39 00:11:22.334 clat (usec): min=225, max=11580, avg=2907.37, stdev=432.07 00:11:22.334 lat (usec): min=229, max=11645, avg=2911.93, stdev=432.50 00:11:22.334 clat percentiles (usec): 00:11:22.334 | 1.00th=[ 2147], 5.00th=[ 2638], 10.00th=[ 2704], 20.00th=[ 2737], 00:11:22.334 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2835], 
60.00th=[ 2868], 00:11:22.334 | 70.00th=[ 2933], 80.00th=[ 2999], 90.00th=[ 3097], 95.00th=[ 3261], 00:11:22.334 | 99.00th=[ 4555], 99.50th=[ 5276], 99.90th=[ 8029], 99.95th=[ 9503], 00:11:22.334 | 99.99th=[11469] 00:11:22.334 bw ( KiB/s): min=87008, max=89848, per=100.00%, avg=88725.33, stdev=1510.50, samples=3 00:11:22.334 iops : min=21752, max=22462, avg=22181.33, stdev=377.63, samples=3 00:11:22.334 write: IOPS=21.8k, BW=85.2MiB/s (89.4MB/s)(171MiB/2001msec); 0 zone resets 00:11:22.334 slat (usec): min=3, max=495, avg= 4.73, stdev= 2.65 00:11:22.334 clat (usec): min=269, max=11494, avg=2913.28, stdev=435.30 00:11:22.334 lat (usec): min=273, max=11513, avg=2918.01, stdev=435.78 00:11:22.334 clat percentiles (usec): 00:11:22.334 | 1.00th=[ 2180], 5.00th=[ 2671], 10.00th=[ 2704], 20.00th=[ 2737], 00:11:22.334 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2868], 00:11:22.334 | 70.00th=[ 2933], 80.00th=[ 2999], 90.00th=[ 3097], 95.00th=[ 3261], 00:11:22.334 | 99.00th=[ 4621], 99.50th=[ 5211], 99.90th=[ 8455], 99.95th=[ 9765], 00:11:22.334 | 99.99th=[11338] 00:11:22.334 bw ( KiB/s): min=86728, max=90472, per=100.00%, avg=88928.00, stdev=1956.31, samples=3 00:11:22.334 iops : min=21682, max=22618, avg=22232.00, stdev=489.08, samples=3 00:11:22.334 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:11:22.334 lat (msec) : 2=0.67%, 4=96.95%, 10=2.30%, 20=0.04% 00:11:22.334 cpu : usr=99.30%, sys=0.10%, ctx=5, majf=0, minf=608 00:11:22.334 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:22.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:22.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:22.334 issued rwts: total=43949,43665,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:22.334 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:22.334 00:11:22.334 Run status group 0 (all jobs): 00:11:22.334 READ: bw=85.8MiB/s (90.0MB/s), 85.8MiB/s-85.8MiB/s (90.0MB/s-90.0MB/s), io=172MiB (180MB), run=2001-2001msec 00:11:22.334 WRITE: bw=85.2MiB/s (89.4MB/s), 85.2MiB/s-85.2MiB/s (89.4MB/s-89.4MB/s), io=171MiB (179MB), run=2001-2001msec 00:11:22.334 ----------------------------------------------------- 00:11:22.334 Suppressions used: 00:11:22.334 count bytes template 00:11:22.334 1 32 /usr/src/fio/parse.c 00:11:22.334 1 8 libtcmalloc_minimal.so 00:11:22.334 ----------------------------------------------------- 00:11:22.334 00:11:22.334 03:56:04 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:22.334 03:56:04 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:22.334 03:56:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:22.334 03:56:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:22.593 03:56:05 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:11:22.593 03:56:05 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:22.852 03:56:05 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:11:22.852 03:56:05 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:22.852 03:56:05 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:22.852 03:56:05 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:11:22.852 03:56:05 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:22.852 03:56:05 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:11:22.852 03:56:05 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:22.852 03:56:05 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:11:22.852 03:56:05 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:11:22.852 03:56:05 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:11:22.852 03:56:05 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:22.852 03:56:05 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:11:22.852 03:56:05 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:11:22.852 03:56:05 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:22.852 03:56:05 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:22.852 03:56:05 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:11:22.852 03:56:05 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:22.852 03:56:05 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:11:23.111 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:23.111 fio-3.35 00:11:23.111 Starting 1 thread 00:11:28.384 00:11:28.384 test: (groupid=0, jobs=1): err= 0: pid=65628: Sat Dec 7 03:56:10 2024 00:11:28.384 read: IOPS=22.5k, BW=88.0MiB/s (92.2MB/s)(176MiB/2001msec) 00:11:28.384 slat (nsec): min=3741, max=75649, avg=4529.10, stdev=1168.87 00:11:28.384 clat (usec): min=244, max=11905, avg=2833.35, stdev=322.55 00:11:28.384 lat (usec): min=248, max=11981, avg=2837.88, stdev=322.99 00:11:28.384 clat percentiles (usec): 00:11:28.384 | 1.00th=[ 2442], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2704], 00:11:28.384 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2835], 00:11:28.384 | 70.00th=[ 2868], 80.00th=[ 2933], 90.00th=[ 2999], 95.00th=[ 3097], 00:11:28.384 | 99.00th=[ 3785], 99.50th=[ 4490], 99.90th=[ 6652], 99.95th=[ 9110], 00:11:28.384 | 99.99th=[11600] 00:11:28.384 bw ( KiB/s): min=86152, max=92208, per=98.62%, avg=88837.33, stdev=3085.62, samples=3 00:11:28.384 iops : min=21538, max=23052, avg=22209.33, stdev=771.40, samples=3 00:11:28.384 write: IOPS=22.4k, BW=87.5MiB/s (91.7MB/s)(175MiB/2001msec); 0 zone resets 00:11:28.384 slat (nsec): min=3890, max=32250, avg=4694.04, stdev=1063.82 00:11:28.384 clat (usec): min=189, max=11713, avg=2839.69, stdev=333.50 00:11:28.384 lat (usec): min=193, max=11735, avg=2844.39, stdev=333.87 00:11:28.384 clat percentiles (usec): 00:11:28.384 | 1.00th=[ 2442], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2704], 00:11:28.384 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2835], 00:11:28.384 | 70.00th=[ 2868], 80.00th=[ 2933], 90.00th=[ 3032], 95.00th=[ 3097], 
00:11:28.384 | 99.00th=[ 3818], 99.50th=[ 4490], 99.90th=[ 7635], 99.95th=[ 9634], 00:11:28.384 | 99.99th=[11338] 00:11:28.384 bw ( KiB/s): min=85760, max=93088, per=99.37%, avg=89013.33, stdev=3732.40, samples=3 00:11:28.384 iops : min=21440, max=23272, avg=22253.33, stdev=933.10, samples=3 00:11:28.384 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:11:28.384 lat (msec) : 2=0.47%, 4=98.62%, 10=0.82%, 20=0.04% 00:11:28.384 cpu : usr=99.40%, sys=0.05%, ctx=2, majf=0, minf=606 00:11:28.384 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:28.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.384 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:28.384 issued rwts: total=45063,44812,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.384 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:28.384 00:11:28.384 Run status group 0 (all jobs): 00:11:28.384 READ: bw=88.0MiB/s (92.2MB/s), 88.0MiB/s-88.0MiB/s (92.2MB/s-92.2MB/s), io=176MiB (185MB), run=2001-2001msec 00:11:28.384 WRITE: bw=87.5MiB/s (91.7MB/s), 87.5MiB/s-87.5MiB/s (91.7MB/s-91.7MB/s), io=175MiB (184MB), run=2001-2001msec 00:11:28.384 ----------------------------------------------------- 00:11:28.384 Suppressions used: 00:11:28.384 count bytes template 00:11:28.384 1 32 /usr/src/fio/parse.c 00:11:28.384 1 8 libtcmalloc_minimal.so 00:11:28.384 ----------------------------------------------------- 00:11:28.384 00:11:28.384 03:56:10 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:28.384 03:56:10 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:11:28.384 00:11:28.384 real 0m20.259s 00:11:28.384 user 0m15.151s 00:11:28.384 sys 0m5.840s 00:11:28.384 03:56:10 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.384 03:56:10 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:11:28.384 ************************************ 00:11:28.384 END TEST nvme_fio 00:11:28.384 ************************************ 00:11:28.384 00:11:28.384 real 1m35.595s 00:11:28.384 user 3m42.958s 00:11:28.384 sys 0m25.523s 00:11:28.384 03:56:10 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.384 ************************************ 00:11:28.384 END TEST nvme 00:11:28.384 ************************************ 00:11:28.384 03:56:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:28.384 03:56:10 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:11:28.384 03:56:10 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:28.384 03:56:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:28.384 03:56:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.384 03:56:10 -- common/autotest_common.sh@10 -- # set +x 00:11:28.384 ************************************ 00:11:28.384 START TEST nvme_scc 00:11:28.384 ************************************ 00:11:28.384 03:56:10 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:28.384 * Looking for test storage... 
00:11:28.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:28.384 03:56:11 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:28.384 03:56:11 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:28.384 03:56:11 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:28.643 03:56:11 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:28.643 03:56:11 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:28.643 03:56:11 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:28.643 03:56:11 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:28.643 03:56:11 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.643 03:56:11 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:11:28.643 03:56:11 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:11:28.643 03:56:11 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:11:28.643 03:56:11 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:11:28.643 03:56:11 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:11:28.643 03:56:11 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:11:28.643 03:56:11 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:28.643 03:56:11 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:11:28.643 03:56:11 nvme_scc -- scripts/common.sh@345 -- # : 1 00:11:28.643 03:56:11 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:28.643 03:56:11 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:28.643 03:56:11 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:11:28.643 03:56:11 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:11:28.643 03:56:11 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.643 03:56:11 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:11:28.643 03:56:11 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:28.643 03:56:11 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:11:28.643 03:56:11 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:11:28.643 03:56:11 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:28.643 03:56:11 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:11:28.643 03:56:11 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:28.643 03:56:11 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:28.643 03:56:11 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:28.643 03:56:11 nvme_scc -- scripts/common.sh@368 -- # return 0 00:11:28.643 03:56:11 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:28.643 03:56:11 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:28.643 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.643 --rc genhtml_branch_coverage=1 00:11:28.643 --rc genhtml_function_coverage=1 00:11:28.643 --rc genhtml_legend=1 00:11:28.643 --rc geninfo_all_blocks=1 00:11:28.643 --rc geninfo_unexecuted_blocks=1 00:11:28.643 00:11:28.643 ' 00:11:28.644 03:56:11 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:28.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.644 --rc genhtml_branch_coverage=1 00:11:28.644 --rc genhtml_function_coverage=1 00:11:28.644 --rc genhtml_legend=1 00:11:28.644 --rc geninfo_all_blocks=1 00:11:28.644 --rc geninfo_unexecuted_blocks=1 00:11:28.644 00:11:28.644 ' 00:11:28.644 03:56:11 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:28.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.644 --rc genhtml_branch_coverage=1 00:11:28.644 --rc genhtml_function_coverage=1 00:11:28.644 --rc genhtml_legend=1 00:11:28.644 --rc geninfo_all_blocks=1 00:11:28.644 --rc geninfo_unexecuted_blocks=1 00:11:28.644 00:11:28.644 ' 00:11:28.644 03:56:11 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:28.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.644 --rc genhtml_branch_coverage=1 00:11:28.644 --rc genhtml_function_coverage=1 00:11:28.644 --rc genhtml_legend=1 00:11:28.644 --rc geninfo_all_blocks=1 00:11:28.644 --rc geninfo_unexecuted_blocks=1 00:11:28.644 00:11:28.644 ' 00:11:28.644 03:56:11 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:28.644 03:56:11 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:28.644 03:56:11 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:28.644 03:56:11 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:28.644 03:56:11 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:28.644 03:56:11 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:28.644 03:56:11 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.644 03:56:11 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.644 03:56:11 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.644 03:56:11 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.644 03:56:11 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.644 03:56:11 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.644 03:56:11 nvme_scc -- paths/export.sh@5 -- # export PATH 00:11:28.644 03:56:11 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
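The lcov --version probe at the top of this test (and of every test above) gates which coverage knobs get exported, since the --rc option names changed between lcov 1.x and 2.x; only the 1.x branch runs in this log, so the 2.x spelling below is an assumption:

  ver=$(lcov --version | awk '{print $NF}')   # e.g. "1.14"
  if lt "$ver" 2; then                        # lt() as sketched earlier
      lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
  else
      lcov_rc_opt='--rc branch_coverage=1 --rc function_coverage=1'   # assumed 2.x names
  fi
  export LCOV_OPTS="$lcov_rc_opt
   --rc genhtml_branch_coverage=1
   --rc genhtml_function_coverage=1
   --rc genhtml_legend=1
   --rc geninfo_all_blocks=1
   --rc geninfo_unexecuted_blocks=1
  "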
00:11:28.644 03:56:11 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:11:28.644 03:56:11 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:28.644 03:56:11 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:11:28.644 03:56:11 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:28.644 03:56:11 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:11:28.644 03:56:11 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:28.644 03:56:11 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:28.644 03:56:11 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:28.644 03:56:11 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:11:28.644 03:56:11 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:28.644 03:56:11 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:11:28.644 03:56:11 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:11:28.644 03:56:11 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:28.644 03:56:11 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:29.212 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:29.470 Waiting for block devices as requested 00:11:29.470 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:29.730 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:29.730 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:29.730 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:35.007 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:35.007 03:56:17 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:35.007 03:56:17 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:35.007 03:56:17 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:35.007 03:56:17 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:35.007 03:56:17 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:35.007 03:56:17 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:35.007 03:56:17 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:35.007 03:56:17 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:35.007 03:56:17 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:35.007 03:56:17 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:35.007 03:56:17 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:35.007 03:56:17 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:35.007 03:56:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:35.007 03:56:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:35.007 03:56:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:35.007 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.007 03:56:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:35.007 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.007 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:35.007 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.007 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.007 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:35.007 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:35.007 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
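The nvme_get trace above walks `nvme id-ctrl /dev/nvme0` line by line, splitting each "reg : val" pair on the colon and storing it in an associative array, which is what the eval lines that follow keep doing for every field. A condensed sketch of that parser (the real functions.sh version also handles multi-controller naming via its ref/shift arguments):

  declare -A nvme0
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}         # strip the padding around the register name
      [[ -n $reg ]] || continue        # skip blank or malformed lines
      nvme0[$reg]=${val# }             # keep the value, minus one leading space
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
  echo "${nvme0[vid]}"                 # -> 0x1b36, as in the trace above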
00:11:35.007-010 03:56:17 nvme_scc -- nvme/functions.sh@21-23 -- # id-ctrl registers parsed into nvme0[], one IFS=:/read -r reg val/eval iteration each:
  ssvid=0x1af4 sn='12341 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0
  ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
  crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0
  wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0
  hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0
  pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7
  awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
  subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
  ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:11:35.010 03:56:17 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:11:35.010 03:56:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:35.010 03:56:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]]
00:11:35.010 03:56:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1
00:11:35.010 03:56:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1
00:11:35.010 03:56:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val
00:11:35.010 03:56:17 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:35.010 03:56:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()'
00:11:35.010 03:56:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1
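The @54 record above is the namespace-discovery glob: an extglob alternation that matches both the generic character nodes (ng0n1) and the block nodes (nvme0n1) under the controller's sysfs directory. A small sketch of how that pattern expands, assuming extglob is enabled as functions.sh requires (nullglob is our addition so an empty match is harmless):

#!/usr/bin/env bash
# Sketch of the @54 namespace glob from the trace. For ctrl=/sys/class/nvme/nvme0,
# ${ctrl##*nvme} -> "0" and ${ctrl##*/} -> "nvme0", so the pattern expands to
# @(ng0|nvme0n)* and hits ng0n1, nvme0n1, ...
shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme0
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
  echo "found namespace node: ${ns##*/}"
done

This is why the trace below runs the id-ns parse twice for the same namespace, once per node flavor.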
00:11:35.010-012 03:56:17 nvme_scc -- nvme/functions.sh@21-23 -- # id-ns registers parsed into ng0n1[]:
  nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
  nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
  mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
  nguid=00000000000000000000000000000000 eui64=0000000000000000
  lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 '
  lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:11:35.012 03:56:17 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1
00:11:35.012 03:56:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:35.012 03:56:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:11:35.012 03:56:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:11:35.012 03:56:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:11:35.012 03:56:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:11:35.012 03:56:17 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:35.012 03:56:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:11:35.012 03:56:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
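The _ctrl_ns[...]=ng0n1 record above shows the other bash trick in play: a nameref (local -n) aliasing the dynamically named per-controller map nvme0_ns, so the scan can file each namespace under whichever controller it is walking. A sketch under those assumptions (collect_ns is an illustrative wrapper, not the script's function):

#!/usr/bin/env bash
# Sketch of the per-controller namespace map seen at @53/@58: a nameref lets
# one function write into an array whose name is computed at runtime.
declare -gA nvme0_ns=()
collect_ns() {
  local ctrl_name=$1 ns_node=$2
  local -n _ctrl_ns="${ctrl_name}_ns"
  _ctrl_ns[${ns_node##*n}]=$ns_node   # "ng0n1" -> key "1"
}
collect_ns nvme0 ng0n1
collect_ns nvme0 nvme0n1              # same key "1"; the later entry overwrites
echo "${nvme0_ns[1]}"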
00:11:35.012-280 03:56:17 nvme_scc -- nvme/functions.sh@21-23 -- # id-ns registers parsed into nvme0n1[] (same geometry as ng0n1):
  nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
  nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
  mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
  nguid=00000000000000000000000000000000 eui64=0000000000000000
  lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 '
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:35.280 03:56:17 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:35.280 03:56:17 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:35.280 03:56:17 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:35.280 03:56:17 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:35.280 03:56:17 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.280 03:56:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.281 
03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:35.281 
03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:35.281 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.281 03:56:17 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.282 03:56:17 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.282 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:35.283 03:56:17 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.283 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:35.284 03:56:17 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
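[Editor's note] The functions.sh@47-63 and @53-58 records above show the surrounding enumeration: each controller under /sys/class/nvme is checked with pci_can_use, parsed, and registered together with its namespaces (both the generic char node ng1n1 and the block node nvme1n1 match the @54 glob). A sketch of that walk, assuming the same bookkeeping array names the trace prints; pci_can_use is the real helper from scripts/common.sh used as-is, and nvme_get_sketch is the hypothetical parser from the previous sketch:

    shopt -s extglob nullglob
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    for ctrl in /sys/class/nvme/nvme+([0-9]); do
        pci=$(< "$ctrl/address")             # e.g. 0000:00:10.0
        pci_can_use "$pci" || continue       # skip devices the test may not touch
        ctrl_dev=${ctrl##*/}                 # nvme1

        nvme_get_sketch "$ctrl_dev" nvme id-ctrl "/dev/$ctrl_dev"

        declare -n _ctrl_ns="${ctrl_dev}_ns" # per-controller namespace map
        # Same glob the trace shows at functions.sh@54: ng<N>n* and nvme<N>n*.
        for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
            ns_dev=${ns##*/}
            nvme_get_sketch "$ns_dev" nvme id-ns "/dev/$ns_dev"
            _ctrl_ns[${ns##*n}]=$ns_dev      # index by namespace id
        done

        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    done

This reproduces the registration visible at @58-63 (ctrls, nvmes, bdfs, ordered_ctrls) so later test stages can look controllers up by name or by PCI address.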
00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.284 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:11:35.285 03:56:17 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.285 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.286 
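
The triplets above all repeat one idiom from nvme/functions.sh's `nvme_get`: run nvme-cli's `id-ns` (or `id-ctrl`), split every output line on the first colon with `IFS=:` and `read -r reg val`, and `eval` the pair into a global associative array named after the device (`local -gA 'ng1n1=()'`). A minimal sketch of that loop, assuming nvme-cli's human-readable `key : value` output; `parse_id_output` is an illustrative name, not the real function in nvme/functions.sh:

    parse_id_output() {                       # e.g. parse_id_output nvme1n1
        local ref=$1 reg val
        local -gA "$ref=()"                   # global assoc array, as at @20
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}          # keys come space-padded
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[${reg}]=\"\${val# }\""   # nvme1n1[nsze]="0x17a17a"
        done
    }
    parse_id_output nvme1n1 < <(nvme id-ns /dev/nvme1n1)
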
03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:35.286 
03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.286 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.287 03:56:17 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.287 03:56:17 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:35.287 03:56:17 
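
The eight lbaf entries recorded for each namespace describe its supported LBA formats: `ms` is the metadata bytes per block, `lbads` the log2 of the data block size, and `rp` a relative-performance hint; the entry tagged `(in use)` matches the low bits of `flbas` (0x7 above selects lbaf7, i.e. 4096-byte blocks with 64 bytes of metadata). A hedged helper for pulling the block size out of one of these strings; the function name is illustrative:

    lbaf_block_size() {                       # lbads is log2(data size)
        [[ $1 =~ lbads:([0-9]+) ]] || return 1
        echo $(( 1 << BASH_REMATCH[1] ))
    }
    lbaf_block_size 'ms:64 lbads:12 rp:0 (in use)'   # -> 4096
    lbaf_block_size 'ms:0 lbads:9 rp:0 '             # -> 512
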
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:35.287 03:56:17 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:35.288 03:56:17 nvme_scc -- scripts/common.sh@18 -- # local i 00:11:35.288 03:56:17 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:35.288 03:56:17 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:35.288 03:56:17 nvme_scc -- scripts/common.sh@27 -- # return 0 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 
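
With both of nvme1's namespace nodes parsed, @60-@63 above file the controller into the script's bookkeeping maps: `ctrls` maps the device name to its array, `nvmes` to the name of the per-controller namespace map (`nvme1_ns`), `bdfs` to the PCI address (0000:00:10.0), plus a slot in the sparse indexed array `ordered_ctrls`; discovery then moves on to nvme2 at 0000:00:12.0, a QEMU controller (vid 0x1b36). A sketch, under the assumption those arrays are already populated, of how a caller could walk them after discovery; the loop itself is illustrative:

    print_discovered_ctrls() {
        local ctrl
        for ctrl in "${ordered_ctrls[@]}"; do  # sparse; expands set slots only
            local -n ns_map=${nvmes[$ctrl]}    # nameref to e.g. nvme1_ns
            printf '%s @ %s (%d namespace nodes)\n' \
                "$ctrl" "${bdfs[$ctrl]}" "${#ns_map[@]}"
            unset -n ns_map                    # re-bind cleanly next round
        done
    }
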
'nvme2[fr]="8.0.0 "' 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:11:35.288 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:35.289 03:56:17 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
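
The `wctemp`/`cctemp` values captured just above (343 and 373) are the composite-temperature thresholds and, per the NVMe spec, are reported in Kelvin, so this controller warns at 70 °C and goes critical at 100 °C. One way to read them back in Celsius, assuming the `nvme2` array populated by this trace:

    echo "warn at $(( nvme2[wctemp] - 273 ))C, critical at $(( nvme2[cctemp] - 273 ))C"
    # -> warn at 70C, critical at 100C
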
00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:35.289 03:56:17 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.289 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:35.290 
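
The `sqes=0x66` and `cqes=0x44` values above pack two sizes into one byte each: the low nibble is the required queue-entry size and the high nibble the maximum, both as log2 of the byte count, which works out to the standard 64-byte submission and 16-byte completion entries here. A small decoder with an illustrative name:

    decode_qes() {
        local v=$(( $1 ))                 # accepts 0x-prefixed input
        echo "required $(( 1 << (v & 0xf) )) B, max $(( 1 << (v >> 4) )) B"
    }
    decode_qes 0x66   # SQE: required 64 B, max 64 B
    decode_qes 0x44   # CQE: required 16 B, max 16 B
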
03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:35.290 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:35.291 
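
Before walking nvme2's namespaces, @53 binds `_ctrl_ns` as a nameref to that controller's map (`nvme2_ns`) and @54 enumerates both the character-device (`ng2n1`) and block-device (`nvme2n1`) nodes with a single extglob pattern; each parsed namespace is later slotted in by index via `_ctrl_ns[${ns##*n}]=...`, as at @58 earlier in the trace. The glob in isolation, as a runnable sketch:

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    # "ng${ctrl##*nvme}" -> ng2 and "${ctrl##*/}n" -> nvme2n, so the
    # pattern matches ng2n1, nvme2n1, ... under the controller's sysfs dir.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "namespace node: ${ns##*/} (index ${ns##*n})"
    done
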
03:56:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.291 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.292 03:56:17 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.292 03:56:17 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.292 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:11:35.293 03:56:17 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.293 
03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.293 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.294 03:56:17 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:35.294 03:56:17 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.294 03:56:17 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:11:35.295 03:56:17 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:35.295 03:56:17 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:11:35.295 03:56:17 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:11:35.295 03:56:17 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:11:35.295 03:56:17 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:11:35.295 03:56:17 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:35.295 03:56:17 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:11:35.295 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.295 03:56:17 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.295 03:56:17 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:11:35.295 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:35.295 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.295 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.295 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:35.295 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:11:35.295 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:11:35.295 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.295 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.295 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:35.295 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:11:35.295 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:11:35.295 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.295 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.295 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:35.295 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:11:35.295 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:11:35.295 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.295 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.295 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:35.295 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:11:35.295 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:11:35.295 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.295 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.295 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:35.295 03:56:18 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:11:35.295 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:11:35.295 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.295 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.295 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.560 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.561 03:56:18 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n ms:8 lbads:12 rp:0 ]] 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.561 03:56:18 nvme_scc -- 
00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]]
00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"'
00:11:35.561 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:11:35.561-00:11:35.563 03:56:18 nvme_scc -- [same @21-@23 IFS=:/read/eval cycle for the remaining nvme2n1 id-ns fields:
    nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
    nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
    npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
    nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
    nguid=00000000000000000000000000000000 eui64=0000000000000000
    lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
    lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0']
00:11:35.563 03:56:18 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
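The @16-@23 records above are bash xtrace output from the nvme_get helper in nvme/functions.sh: it feeds `/usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1` into an `IFS=: read -r reg val` loop and evals each "field : value" pair into a global associative array, so for instance nvme2n1[flbas] ends up holding 0x4 (format #4, lbads:12 = 4096-byte blocks, matching the "(in use)" marker on lbaf4). A minimal sketch of the pattern, reconstructed from the traced statements; the whitespace normalization of nvme-cli's output is simplified here and illustrative only, and NVME_CMD is an assumed name for the traced binary path:

    nvme_get() {                               # e.g. nvme_get nvme2n1 id-ns /dev/nvme2n1
        local ref=$1 reg val                   # @17: name of the array to fill
        shift                                  # @18
        local -gA "$ref=()"                    # @20: global associative array (as traced)
        while IFS=: read -r reg val; do        # @21: split "field : value" lines
            reg=${reg//[[:space:]]/}           # key cleanup (illustrative)
            [[ -n $val ]] || continue          # @22: skip lines without a value
            eval "${ref}[$reg]=\"${val# }\""   # @23: e.g. nvme2n1[nsfeat]="0x14"
        done < <("${NVME_CMD:-nvme}" "$@")     # @16: here /usr/local/src/nvme-cli/nvme
    }
    # Afterwards the fields are plain lookups: echo "${nvme2n1[nsze]}" -> 0x100000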
00:11:35.563 03:56:18 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:35.563 03:56:18 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:11:35.563 03:56:18 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:11:35.563 03:56:18 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:11:35.563 03:56:18 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
00:11:35.563 03:56:18 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:35.563 03:56:18 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:11:35.563 03:56:18 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:11:35.563-00:11:35.564 03:56:18 nvme_scc -- [same @21-@23 cycle; nvme2n2 reports identical id-ns values to nvme2n1:
    nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dlfeat=1,
    nawun/nawupf/nacwu/nabsn/nabo/nabspf/noiob/nvmcap/npwg/npwa/npdg/npda/nows all 0,
    mssrl=128 mcl=128 msrc=127, zero nguid/eui64, lbaf0-lbaf7 as above with lbaf4 'ms:0 lbads:12 rp:0 (in use)']
00:11:35.564 03:56:18 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
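The functions.sh@54 record shows how the namespaces are enumerated: an extglob alternation matching both the block devices (nvme2n1, ...) and the generic char devices (ng2n1, ...) under the controller's sysfs directory. A sketch of what that glob expands to for nvme2, under the paths seen in this run (extglob must be enabled):

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme2
    # "ng${ctrl##*nvme}" -> ng2      (generic devices ng2n1, ng2n2, ...)
    # "${ctrl##*/}n"     -> nvme2n   (block devices nvme2n1, nvme2n2, nvme2n3)
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue   # @55: an unmatched glob stays literal, so test it
        echo "${ns##*/}"           # -> nvme2n1 nvme2n2 nvme2n3 in this run
    done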
00:11:35.564 03:56:18 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:11:35.564 03:56:18 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:11:35.564 03:56:18 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:11:35.564 03:56:18 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:11:35.564 03:56:18 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:11:35.564-00:11:35.566 03:56:18 nvme_scc -- [same @21-@23 cycle; nvme2n3 reports the same id-ns values again:
    nsze=ncap=nuse=0x100000, nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dlfeat=1,
    mssrl=128 mcl=128 msrc=127, zero nguid/eui64, lbaf4 'ms:0 lbads:12 rp:0 (in use)']
00:11:35.566 03:56:18 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:11:35.566 03:56:18 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:11:35.566 03:56:18 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:11:35.566 03:56:18 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:11:35.566 03:56:18 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
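With all three namespaces parsed, the @58-@63 records register controller nvme2 in the suite's lookup tables. A sketch of the resulting bookkeeping; only the assignments appear in the trace, so the declarations and the _ctrl_ns -> nvme2_ns indirection are assumptions:

    declare -A ctrls nvmes bdfs                      # assumed to be declared earlier
    declare -a ordered_ctrls
    declare -a nvme2_ns                              # _ctrl_ns presumably refers to this
    nvme2_ns=([1]=nvme2n1 [2]=nvme2n2 [3]=nvme2n3)   # @58: keyed by namespace id
    ctrls[nvme2]=nvme2                               # @60
    nvmes[nvme2]=nvme2_ns                            # @61: name of the namespace map above
    bdfs[nvme2]=0000:00:12.0                         # @62: controller's PCI address
    ordered_ctrls[2]=nvme2                           # @63: index from ${ctrl_dev/nvme/}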
00:11:35.566 03:56:18 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:11:35.566 03:56:18 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:11:35.566 03:56:18 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:11:35.566 03:56:18 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:11:35.566 03:56:18 nvme_scc -- scripts/common.sh@18 -- # local i
00:11:35.566 03:56:18 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]]
00:11:35.566 03:56:18 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:11:35.566 03:56:18 nvme_scc -- scripts/common.sh@27 -- # return 0
00:11:35.566 03:56:18 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
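Before touching nvme3, the trace drops into scripts/common.sh's pci_can_use to gate the controller's PCI address; with no allow or block list set in this run it returns 0 at @27. A sketch of that gate under assumed variable names (PCI_ALLOWED/PCI_BLOCKED, whose empty expansions would explain the literal `[[ =~ 0000:00:13.0 ]]` and `[[ -z '' ]]` records; the real helper may treat an allow-list miss differently):

    pci_can_use() {
        local i                                  # @18
        # @21: explicitly allowed? (PCI_ALLOWED expands empty in this run)
        [[ ${PCI_ALLOWED-} =~ $1 ]] && return 0
        # @25: no block list either, so the device is usable
        [[ -z ${PCI_BLOCKED-} ]] && return 0     # @27: return 0
        for i in $PCI_BLOCKED; do
            [[ $i == "$1" ]] && return 1         # reject anything on the block list
        done
        return 0
    }
    # pci_can_use 0000:00:13.0 && ctrl_dev=nvme3   # as at functions.sh@50-51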
00:11:35.566 03:56:18 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:11:35.566 03:56:18 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val
00:11:35.566 03:56:18 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:35.566 03:56:18 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()'
00:11:35.566 03:56:18 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:11:35.566-00:11:35.567 03:56:18 nvme_scc -- [same @21-@23 cycle for the nvme3 id-ctrl fields:
    vid=0x1b36 ssvid=0x1af4 sn='12343 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 '
    rab=6 ieee=525400 cmic=0x2 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0
    oaes=0x100 ctratt=0x88010 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
    crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3
    frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373
    mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0
    fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0]
00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:11:35.567
03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:35.567 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:35.568 
03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.568 03:56:18 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:35.568 03:56:18 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:35.568 03:56:18 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:11:35.568 03:56:18 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:11:35.569 03:56:18 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:11:35.569 03:56:18 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:35.569 03:56:18 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:11:35.569 03:56:18 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:11:35.828 03:56:18 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:11:35.828 03:56:18 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:11:35.828 03:56:18 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
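A note on the pattern traced above and resuming below: the hundreds of [[ -n ... ]] / eval pairs are a single generic parser in nvme/functions.sh. It pipes /usr/local/src/nvme-cli/nvme id-ctrl through "IFS=: read -r reg val" and stores every register in a per-controller associative array (nvme3 here), which helpers such as ctrl_has_scc then read back through a bash nameref. A minimal sketch of that pattern, assuming bash 4.3+ for "local -n"; the code below is condensed for illustration, not the exact SPDK helpers:

    #!/usr/bin/env bash
    # Sketch of the parse loop traced above: each "reg : val" line emitted by
    # `nvme id-ctrl` becomes nvme3[reg]=val, and fields are read back through
    # a nameref, as nvme/functions.sh@69-76 does. Condensed for illustration.
    declare -A nvme3=()

    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue        # skip banner/blank lines with no value
        reg=${reg//[[:space:]]/}         # strip the column padding around keys
        nvme3[$reg]=${val# }             # e.g. nvme3[oncs]='0x15d'
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)

    get_nvme_ctrl_feature() {            # nameref lookup (functions.sh@69-76)
        local ctrl=$1 reg=${2:-oncs}
        local -n _ctrl=$ctrl
        [[ -n ${_ctrl[$reg]} ]] && echo "${_ctrl[$reg]}"
    }

    get_nvme_ctrl_feature nvme3 oncs     # prints 0x15d on these QEMU devices

Keeping one associative array per controller is what lets the later feature checks (oncs here, and the ctratt-based checks in the FDP test) stay cheap lookups instead of re-running nvme-cli for every field.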
00:11:35.828 03:56:18 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:11:35.828 03:56:18 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:11:35.828 03:56:18 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:11:35.828 03:56:18 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:11:35.828 03:56:18 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:11:35.828 03:56:18 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:11:35.828 03:56:18 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:11:35.828 03:56:18 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:11:35.828 03:56:18 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:11:35.828 03:56:18 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:11:35.828 03:56:18 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:11:35.828 03:56:18 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:11:35.828 03:56:18 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:11:35.828 03:56:18 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:11:35.828 03:56:18 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:11:35.828 03:56:18 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:11:35.828 03:56:18 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:11:35.828 03:56:18 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:11:35.828 03:56:18 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:11:35.828 03:56:18 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:11:35.828 03:56:18 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:11:35.828 03:56:18 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:11:35.828 03:56:18 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:11:35.828 03:56:18 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:11:35.828 03:56:18 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:11:35.828 03:56:18 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:11:35.828 03:56:18 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:11:35.828 03:56:18 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:11:36.397 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:11:37.357 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:11:37.357 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:11:37.357 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:11:37.357 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:11:37.357 03:56:19 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:11:37.357 03:56:19 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:11:37.357 03:56:19 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:37.357 03:56:19 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:11:37.357 ************************************
00:11:37.357 START TEST nvme_simple_copy
00:11:37.357 ************************************
00:11:37.357 03:56:19 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:11:37.614 Initializing NVMe Controllers
00:11:37.614 Attaching to 0000:00:10.0
00:11:37.614 Controller supports SCC. Attached to 0000:00:10.0
00:11:37.614 Namespace ID: 1 size: 6GB
00:11:37.614 Initialization complete.
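"Controller supports SCC" is exactly the property the selection loop above was probing for: Simple Copy support is advertised as bit 8 of the ONCS (Optional NVM Command Support) field, and all four QEMU controllers report oncs=0x15d, so 0000:00:10.0 (nvme1) qualifies. The result lines that follow show the copy happening device-side: 64 randomly written LBAs are copied by the controller itself to LBA 256 and then re-read for comparison. A small sketch of the gate, mirroring the traced helper (condensed, illustrative):

    # ONCS bit 8 advertises the Simple Copy command (functions.sh@184-199).
    ctrl_has_scc() {
        local oncs=$1              # e.g. 0x15d from the parsed id-ctrl data
        (( oncs & 1 << 8 ))        # 0x15d & 0x100 = 0x100 -> supported
    }

    ctrl_has_scc 0x15d && echo "Controller supports SCC."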
00:11:37.614
00:11:37.614 Controller QEMU NVMe Ctrl (12340 )
00:11:37.614 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:11:37.614 Namespace Block Size:4096
00:11:37.614 Writing LBAs 0 to 63 with Random Data
00:11:37.614 Copied LBAs from 0 - 63 to the Destination LBA 256
00:11:37.614 LBAs matching Written Data: 64
00:11:37.614 ************************************
00:11:37.614 END TEST nvme_simple_copy ************************************
00:11:37.614
00:11:37.614 real 0m0.318s
00:11:37.614 user 0m0.127s
00:11:37.614 sys 0m0.090s
00:11:37.614 03:56:20 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:37.614 03:56:20 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:11:37.872 ************************************
00:11:37.872 END TEST nvme_scc ************************************
00:11:37.872
00:11:37.872 real 0m9.431s
00:11:37.872 user 0m1.754s
00:11:37.872 sys 0m2.671s
00:11:37.872 03:56:20 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:37.872 03:56:20 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:11:37.872 03:56:20 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:11:37.872 03:56:20 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:11:37.872 03:56:20 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:11:37.872 03:56:20 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:11:37.872 03:56:20 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:11:37.872 03:56:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:37.872 03:56:20 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:37.872 03:56:20 -- common/autotest_common.sh@10 -- # set +x
00:11:37.872 ************************************
00:11:37.872 START TEST nvme_fdp ************************************
00:11:37.872 03:56:20 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:11:37.872 * Looking for test storage...
00:11:37.872 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:11:37.872 03:56:20 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:11:37.872 03:56:20 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version
00:11:37.872 03:56:20 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:11:38.131 03:56:20 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
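The lt / cmp_versions trace that begins here (and continues below) is scripts/common.sh deciding whether the installed lcov (1.15) is older than 2, so that compatible coverage flags get exported: both version strings are split on '.', '-' and ':', then compared field by field as integers. A condensed, self-contained sketch of that comparison, simplified to the '<' case (the real helper also handles '>', '=' and validates each field with its decimal() check):

    # Condensed cmp_versions from the trace: split on '.', '-', ':' and
    # compare numerically, field by field; missing fields count as 0.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # strictly lower
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1                                            # equal: not lower
    }

    lt 1.15 2 && echo "lcov 1.15 predates 2"                # as in the trace

With ver1=(1 15) and ver2=(2) the very first field decides the result, which is why the trace below returns 0 after a single pass through the loop.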
00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@365 -- # decimal 1
00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@353 -- # local d=1
00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@355 -- # echo 1
00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1
00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@366 -- # decimal 2
00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@353 -- # local d=2
00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@355 -- # echo 2
00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2
00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@368 -- # return 0
00:11:38.131 03:56:20 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:38.131 03:56:20 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:11:38.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:38.131 --rc genhtml_branch_coverage=1
00:11:38.131 --rc genhtml_function_coverage=1
00:11:38.131 --rc genhtml_legend=1
00:11:38.131 --rc geninfo_all_blocks=1
00:11:38.131 --rc geninfo_unexecuted_blocks=1
00:11:38.131
00:11:38.131 '
00:11:38.131 03:56:20 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:11:38.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:38.131 --rc genhtml_branch_coverage=1
00:11:38.131 --rc genhtml_function_coverage=1
00:11:38.131 --rc genhtml_legend=1
00:11:38.131 --rc geninfo_all_blocks=1
00:11:38.131 --rc geninfo_unexecuted_blocks=1
00:11:38.131
00:11:38.131 '
00:11:38.131 03:56:20 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:11:38.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:38.131 --rc genhtml_branch_coverage=1
00:11:38.131 --rc genhtml_function_coverage=1
00:11:38.131 --rc genhtml_legend=1
00:11:38.131 --rc geninfo_all_blocks=1
00:11:38.131 --rc geninfo_unexecuted_blocks=1
00:11:38.131
00:11:38.131 '
00:11:38.131 03:56:20 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:11:38.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:38.131 --rc genhtml_branch_coverage=1
00:11:38.131 --rc genhtml_function_coverage=1
00:11:38.131 --rc genhtml_legend=1
00:11:38.131 --rc geninfo_all_blocks=1
00:11:38.131 --rc geninfo_unexecuted_blocks=1
00:11:38.131
00:11:38.131 '
00:11:38.131 03:56:20 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:11:38.131 03:56:20 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:11:38.131 03:56:20 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:11:38.131 03:56:20 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:11:38.131 03:56:20 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob
00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.131 03:56:20 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.131 03:56:20 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.131 03:56:20 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.131 03:56:20 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.131 03:56:20 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:11:38.131 03:56:20 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.131 03:56:20 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:11:38.131 03:56:20 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:38.131 03:56:20 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:11:38.131 03:56:20 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:38.131 03:56:20 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:11:38.131 03:56:20 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:38.131 03:56:20 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:38.131 03:56:20 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:38.131 03:56:20 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:11:38.131 03:56:20 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:38.131 03:56:20 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:38.697 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:38.955 Waiting for block devices as requested 00:11:38.955 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:39.213 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:39.213 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:39.472 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:44.824 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:44.824 03:56:27 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:44.824 03:56:27 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:44.824 03:56:27 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:44.824 03:56:27 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:44.824 03:56:27 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:44.824 03:56:27 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.824 03:56:27 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:44.824 03:56:27 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:44.824 03:56:27 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:44.825 03:56:27 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.825 03:56:27 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.825 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:44.826 03:56:27 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.826 
03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:44.826 03:56:27 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:44.826 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:11:44.827 03:56:27 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:11:44.827 03:56:27 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.827 03:56:27 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.827 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:11:44.828 03:56:27 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
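[The ng0n1 id-ns dump completes here. As the trace above shows, nvme/functions.sh builds one bash associative array per controller or namespace by piping `nvme id-ctrl` / `nvme id-ns` output through an `IFS=: read -r reg val` loop and eval'ing each non-empty pair into the array. A minimal, self-contained sketch of that pattern, under the assumption of "name : value" input; `parse_id_output`, `ns_info`, and the sample heredoc are illustrative stand-ins, not names from the SPDK scripts:

#!/usr/bin/env bash
# Parse "name : value" lines (the shape `nvme id-ns` emits) into an
# associative array, mirroring the IFS=: / read -r reg val loop traced above.
declare -A ns_info

parse_id_output() {
    local reg val
    # Split each line on the first ':' only; `read` leaves the remainder of
    # the line (including any further colons, e.g. "ms:0 lbads:9") in val.
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue          # skip headers and blank lines
        reg=${reg//[[:space:]]/}           # strip whitespace around the key
        ns_info[$reg]=${val# }             # drop one leading space, keep the rest
    done
}

# Sample input mimicking the id-ns fields seen in the log.
parse_id_output <<'EOF'
nsze  : 0x140000
ncap  : 0x140000
nlbaf : 7
flbas : 0x4
EOF

printf '%s=%s\n' nsze "${ns_info[nsze]}" nlbaf "${ns_info[nlbaf]}"

Unlike the traced original, this sketch assigns values directly instead of round-tripping through eval, which sidesteps quoting pitfalls when a value contains shell metacharacters.]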
00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:44.828 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.829 03:56:27 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:44.829 03:56:27 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:44.829 03:56:27 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.829 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:44.830 03:56:27 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:44.830 03:56:27 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:44.830 03:56:27 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:44.830 03:56:27 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:44.830 03:56:27 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:44.830 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.831 03:56:27 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
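[Editor's sketch] The trace above is nvme/functions.sh@16-23 at work: nvme_get runs nvme-cli's id-ctrl against the controller node and turns each "field : value" line of its output into an entry of a global associative array named after the device (here nvme1). A minimal sketch of that loop follows; the helper name nvme_get_sketch and the exact whitespace trimming are illustrative assumptions, not verbatim functions.sh.

    #!/usr/bin/env bash
    # Sketch of the traced loop: split each id-ctrl output line on the
    # first ':' (IFS=:), skip empty values, and eval the pair into the
    # named global associative array.
    nvme_get_sketch() {
        local ref=$1; shift
        local reg val
        local -gA "$ref=()"                 # e.g. declares global nvme1=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue       # matches the traced [[ -n ... ]] guard
            reg=${reg//[^a-zA-Z0-9_]/}      # keep only the field name (sn, mn, fr, ...)
            eval "${ref}[${reg}]=\"${val# }\""
        done < <("$@")
    }
    # Usage, assuming nvme-cli is installed and /dev/nvme1 exists:
    # nvme_get_sketch nvme1 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
    # echo "${nvme1[mdts]}"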
00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.831 03:56:27 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:44.831 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
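[Editor's sketch] Most of the zeroed fields above (hmpre, hmmin, tnvmcap, sanicap, ...) simply mean the QEMU controller does not implement the feature; the non-zero ones are worth decoding. For example, the mdts=7 captured earlier is the log2 of the maximum data transfer size in units of the controller's minimum memory page size, so assuming the common 4 KiB CAP.MPSMIN (an assumption; read CAP to be exact) this controller accepts transfers up to 512 KiB:

    # Reuses the nvme1 array populated by the trace above; the 4 KiB
    # minimum page size is an assumption, not read from CAP.MPSMIN.
    mdts=${nvme1[mdts]:-0}
    printf 'max data transfer: %d KiB\n' $(( (4096 << mdts) / 1024 ))   # -> 512 KiB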
00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.832 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:11:44.833 03:56:27 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
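[Editor's sketch] Once the controller array is filled, functions.sh@54-57 (visible in the trace above) walks the controller's namespaces with an extglob pattern that matches both the generic char node (ng1n1) and the block node (nvme1n1) under the same sysfs directory, and runs nvme_get again with id-ns for each. Roughly as below; the echo stands in for the real nvme_get call, and nullglob is my assumption for the no-namespace case:

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme1
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        ns_dev=${ns##*/}     # ng1n1 on the first pass, nvme1n1 on the second
        echo "would run: nvme id-ns /dev/$ns_dev"
    done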
00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.833 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:11:44.834 03:56:27 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
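[Editor's note] A detail that is easy to misread in the id-ns dump above: per the NVMe spec, msrc is a 0-based value, while mssrl and mcl are plain logical-block counts. So for this namespace the Copy command accepts up to 128 source ranges (msrc=127), each at most 128 blocks, and at most 128 blocks per command:

    # Values come from the ng1n1 array built by the trace above;
    # msrc is 0-based, mssrl/mcl are in logical blocks.
    printf 'Copy: up to %d ranges, <=%d LBs each, <=%d LBs per command\n' \
        $(( ${ng1n1[msrc]} + 1 )) "${ng1n1[mssrl]}" "${ng1n1[mcl]}"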
00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:44.834 03:56:27 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:44.834 03:56:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:44.835 03:56:27 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:44.835 03:56:27 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.835 03:56:27 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.835 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:44.836 03:56:27 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
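[Editor's sketch] This second pass over /dev/nvme1n1 is expected to mirror the ng1n1 data exactly, since the generic char node and the block node front the same namespace. A field-by-field spot check over the two arrays built above could look like this (the field selection is illustrative):

    # Both nodes describe one namespace, so their identify data should agree.
    for reg in nsze ncap nuse flbas; do
        [[ ${ng1n1[$reg]} == "${nvme1n1[$reg]}" ]] ||
            echo "id-ns mismatch on $reg: ${ng1n1[$reg]} vs ${nvme1n1[$reg]}"
    done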
00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:44.836 03:56:27 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:44.836 03:56:27 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:44.836 03:56:27 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:44.836 03:56:27 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.836 03:56:27 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:44.836 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
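The ver value captured just above, 0x10400, packs the NVMe specification version as major/minor/tertiary bytes, so this QEMU controller reports NVMe 1.4.0. Decoding it in the same shell:

ver=0x10400
printf 'NVMe %d.%d.%d\n' "$((ver >> 16))" "$(((ver >> 8) & 0xff))" "$((ver & 0xff))"   # -> NVMe 1.4.0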
00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:44.837 03:56:27 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
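wctemp=343 and cctemp=373 recorded above are the warning and critical composite temperature thresholds, which NVMe reports in kelvin rather than degrees Celsius; subtracting 273 gives the familiar values:

echo "warning: $((343 - 273)) C, critical: $((373 - 273)) C"   # -> warning: 70 C, critical: 100 C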
00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.837 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:44.838 03:56:27 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.838 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.839 03:56:27 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
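Among the controller fields recorded a little earlier, sqes=0x66 and cqes=0x44 encode the minimum (bits 3:0) and maximum (bits 7:4) queue entry sizes as powers of two, i.e. 64-byte submission and 16-byte completion queue entries:

for f in sqes:0x66 cqes:0x44; do
    v=$((${f#*:}))                       # numeric value after the ':'
    printf '%s entry: %d..%d bytes\n' "${f%%:*}" "$((1 << (v & 0xf)))" "$((1 << (v >> 4)))"
done
# sqes entry: 64..64 bytes
# cqes entry: 16..16 bytes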
00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:11:44.839 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.840 
03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabsn]="0"' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.840 03:56:27 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:44.840 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # 
ng2n2[nsze]=0x100000 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read 
-r reg val 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:11:44.841 03:56:27 nvme_fdp -- 
nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0 00:11:44.841 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nows]="0"' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mssrl]="128"' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[mcl]="128"' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[msrc]="127"' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.842 
03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nulbaf]="0"' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[anagrpid]="0"' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nsattr]="0"' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmsetid]="0"' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[endgid]="0"' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[nguid]="00000000000000000000000000000000"' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[eui64]="0000000000000000"' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.842 03:56:27 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]] 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n3 reg val 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n3=()' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@16 -- # 
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:11:44.842 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:11:44.843 
03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
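The xtrace stream here (the nvme/functions.sh@NN prefixes come from a PS4 that embeds the source file and line number) shows the nvme_get helper populating a global associative array, here ng2n3, from `nvme id-ns` output: each `field : value` line is split on the first colon and stored via eval. A minimal sketch of that pattern, assuming nvme-cli's stock `field : value` id-ns layout; the whitespace trimming is illustrative, not lifted from nvme/functions.sh:

    #!/usr/bin/env bash
    # Sketch of the nvme_get pattern driving the trace: run an nvme-cli
    # subcommand, split each "field : value" line on the first ":", and
    # store the pairs in a global associative array named by $1.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                      # declare the array globally from inside the function
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}             # field names are padded in nvme-cli output (assumption)
            val=${val#"${val%%[![:space:]]*}"}   # strip leading blanks; keep internal ones
            [[ -n $reg && -n $val ]] || continue # mirrors the [[ -n ... ]] guards in the trace
            eval "${ref}[${reg}]=\"${val}\""     # e.g. ng2n3[nsze]="0x100000"
        done < <(nvme "$@")                      # the log invokes /usr/local/src/nvme-cli/nvme
    }

    nvme_get ng2n3 id-ns /dev/ng2n3              # then: echo "${ng2n3[nsze]}" -> 0x100000

Values such as 'ms:0 lbads:9 rp:0 ' survive intact because `read` assigns everything after the first colon, embedded colons included, to the last variable; that is also why the eval quotes the right-hand side.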
00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:11:44.843 03:56:27 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:11:44.843 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:44.844 03:56:27 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:44.844 03:56:27 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:44.844 03:56:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:45.126 
03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:45.126 03:56:27 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.126 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:45.127 
03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
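At this point every descriptor for nvme2n1 is in place, and the interesting fields decode as follows: flbas=0x4 selects LBA format 4 (FLBAS bits 3:0 index into lbaf0..lbaf7; nlbaf=7 means eight formats), and lbaf4 reads 'ms:0 lbads:12 rp:0 (in use)', i.e. 4096-byte data blocks (2^12) with no per-block metadata. With nsze = ncap = nuse = 0x100000 blocks, each namespace is 4 GiB. A short sketch of that decoding, seeded with the exact values captured above:

    #!/usr/bin/env bash
    # Decode the in-use LBA format from the fields the trace stored.
    declare -A nvme2n1=(                 # values copied from the log
        [flbas]=0x4 [nsze]=0x100000
        [lbaf4]='ms:0 lbads:12 rp:0 (in use)'
    )
    fmt=$(( ${nvme2n1[flbas]} & 0xf ))   # FLBAS bits 3:0 -> format index 4
    lbaf=${nvme2n1[lbaf$fmt]}
    lbads=${lbaf#*lbads:}                # '12 rp:0 (in use)'
    lbads=${lbads%% *}                   # '12'
    bs=$(( 1 << lbads ))                 # 2^12 = 4096-byte logical blocks
    echo "block size: $bs, capacity: $(( ${nvme2n1[nsze]} * bs )) bytes"   # 4294967296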
00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.127 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:45.128 03:56:27 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.128 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:45.129 03:56:27 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.129 03:56:27 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:45.129 03:56:27 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.129 03:56:27 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.129 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:45.130 03:56:27 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.130 03:56:27 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:45.130 03:56:27 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:45.130 03:56:27 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:45.131 03:56:27 nvme_fdp -- scripts/common.sh@18 -- # local i 00:11:45.131 03:56:27 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:45.131 03:56:27 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:45.131 03:56:27 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.131 03:56:27 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
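(The trace here is nvme_get from nvme/functions.sh walking the output of `nvme id-ctrl /dev/nvme3` line by line: it splits each "field : value" pair with IFS=: and `read -r reg val`, skips empty values, and evals the pair into a per-device associative array. A minimal sketch of that pattern, reconstructed from the trace rather than copied verbatim from the helper:

  #!/usr/bin/env bash
  # Parse "field : value" lines from nvme-cli into an associative array,
  # mirroring the nvme_get pattern visible in the trace above.
  declare -gA nvme3=()
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}     # field names arrive padded, e.g. "vid       "
      [[ -n $val ]] || continue    # skip headers and blank values, as the trace does
      val=${val# }                 # drop the separator space after ':'
      eval "nvme3[$reg]=\"$val\""  # e.g. nvme3[ctratt]=0x88010
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)

The quoting inside the eval matters: values such as mn="QEMU NVMe Ctrl " keep their trailing spaces, which is why they appear quoted in the trace.)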
00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.131 03:56:27 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.131 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.132 
03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.132 03:56:27 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:45.132 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
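(Earlier in this trace, functions.sh@54-57 — repeated for each controller — discovered the namespaces themselves with an extglob over sysfs: `for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*` matches both ng2* character nodes and nvme2n* block nodes under /sys/class/nvme/nvme2. A sketch of that discovery loop under the same assumptions, with extglob enabled and paths as shown in the trace:

  shopt -s extglob nullglob
  ctrl=/sys/class/nvme/nvme2
  # Matches ng2* and nvme2n* entries, i.e. both device-node flavours:
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
      [[ -e $ns ]] || continue    # functions.sh@55 re-checks existence
      ns_dev=${ns##*/}            # e.g. nvme2n3
      echo "would run: nvme id-ns /dev/$ns_dev"
  done

Each namespace found this way is then parsed with the same nvme_get pattern sketched above.)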
00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
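(Once the power-state fields just below are read, functions.sh@60-63 files the finished controller into the global lookup tables: ctrls maps the device to its id-ctrl array, nvmes to its namespace map, bdfs to the backing PCI address, and ordered_ctrls keeps them indexed by controller number. A hedged sketch of that bookkeeping — the register_ctrl wrapper is hypothetical; only the four array names come from the trace:

  declare -gA ctrls=() nvmes=() bdfs=()
  declare -ga ordered_ctrls=()
  register_ctrl() {   # hypothetical helper, e.g. register_ctrl nvme3 0000:00:13.0
      local ctrl_dev=$1 pci=$2
      ctrls["$ctrl_dev"]=$ctrl_dev                # name of the id-ctrl assoc array
      nvmes["$ctrl_dev"]=${ctrl_dev}_ns           # name of the namespace map
      bdfs["$ctrl_dev"]=$pci                      # PCI BDF backing the controller
      ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev  # index 3 for nvme3
  }
)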
00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:45.133 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:45.134 03:56:27 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:11:45.134 03:56:27 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:11:45.134 03:56:27 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:11:45.134 03:56:27 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:11:45.134 03:56:27 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:45.703 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:46.643 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:46.643 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:46.643 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:46.643 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:46.902 03:56:29 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:46.902 03:56:29 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:46.902 03:56:29 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.902 03:56:29 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:46.902 ************************************ 00:11:46.902 START TEST nvme_flexible_data_placement 00:11:46.902 ************************************ 00:11:46.902 03:56:29 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:47.162 Initializing NVMe Controllers 00:11:47.162 Attaching to 0000:00:13.0 00:11:47.163 Controller supports FDP Attached to 0000:00:13.0 00:11:47.163 Namespace ID: 1 Endurance Group ID: 1 00:11:47.163 Initialization complete. 
00:11:47.163 00:11:47.163 ================================== 00:11:47.163 == FDP tests for Namespace: #01 == 00:11:47.163 ================================== 00:11:47.163 00:11:47.163 Get Feature: FDP: 00:11:47.163 ================= 00:11:47.163 Enabled: Yes 00:11:47.163 FDP configuration Index: 0 00:11:47.163 00:11:47.163 FDP configurations log page 00:11:47.163 =========================== 00:11:47.163 Number of FDP configurations: 1 00:11:47.163 Version: 0 00:11:47.163 Size: 112 00:11:47.163 FDP Configuration Descriptor: 0 00:11:47.163 Descriptor Size: 96 00:11:47.163 Reclaim Group Identifier format: 2 00:11:47.163 FDP Volatile Write Cache: Not Present 00:11:47.163 FDP Configuration: Valid 00:11:47.163 Vendor Specific Size: 0 00:11:47.163 Number of Reclaim Groups: 2 00:11:47.163 Number of Reclaim Unit Handles: 8 00:11:47.163 Max Placement Identifiers: 128 00:11:47.163 Number of Namespaces Supported: 256 00:11:47.163 Reclaim unit Nominal Size: 6000000 bytes 00:11:47.163 Estimated Reclaim Unit Time Limit: Not Reported 00:11:47.163 RUH Desc #000: RUH Type: Initially Isolated 00:11:47.163 RUH Desc #001: RUH Type: Initially Isolated 00:11:47.163 RUH Desc #002: RUH Type: Initially Isolated 00:11:47.163 RUH Desc #003: RUH Type: Initially Isolated 00:11:47.163 RUH Desc #004: RUH Type: Initially Isolated 00:11:47.163 RUH Desc #005: RUH Type: Initially Isolated 00:11:47.163 RUH Desc #006: RUH Type: Initially Isolated 00:11:47.163 RUH Desc #007: RUH Type: Initially Isolated 00:11:47.163 00:11:47.163 FDP reclaim unit handle usage log page 00:11:47.163 ====================================== 00:11:47.163 Number of Reclaim Unit Handles: 8 00:11:47.163 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:47.163 RUH Usage Desc #001: RUH Attributes: Unused 00:11:47.163 RUH Usage Desc #002: RUH Attributes: Unused 00:11:47.163 RUH Usage Desc #003: RUH Attributes: Unused 00:11:47.163 RUH Usage Desc #004: RUH Attributes: Unused 00:11:47.163 RUH Usage Desc #005: RUH Attributes: Unused 00:11:47.163 RUH Usage Desc #006: RUH Attributes: Unused 00:11:47.163 RUH Usage Desc #007: RUH Attributes: Unused 00:11:47.163 00:11:47.163 FDP statistics log page 00:11:47.163 ======================= 00:11:47.163 Host bytes with metadata written: 1044017152 00:11:47.163 Media bytes with metadata written: 1044140032 00:11:47.163 Media bytes erased: 0 00:11:47.163 00:11:47.163 FDP Reclaim unit handle status 00:11:47.163 ============================== 00:11:47.163 Number of RUHS descriptors: 2 00:11:47.163 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003c59 00:11:47.163 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:47.163 00:11:47.163 FDP write on placement id: 0 success 00:11:47.163 00:11:47.163 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:11:47.163 00:11:47.163 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:47.163 00:11:47.163 Get Feature: FDP Events for Placement handle: #0 00:11:47.163 ======================== 00:11:47.163 Number of FDP Events: 6 00:11:47.163 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:47.163 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:47.163 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:11:47.163 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:47.163 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:47.163 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:11:47.163 00:11:47.163 FDP events log
page 00:11:47.163 =================== 00:11:47.163 Number of FDP events: 1 00:11:47.163 FDP Event #0: 00:11:47.163 Event Type: RU Not Written to Capacity 00:11:47.163 Placement Identifier: Valid 00:11:47.163 NSID: Valid 00:11:47.163 Location: Valid 00:11:47.163 Placement Identifier: 0 00:11:47.163 Event Timestamp: 7 00:11:47.163 Namespace Identifier: 1 00:11:47.163 Reclaim Group Identifier: 0 00:11:47.163 Reclaim Unit Handle Identifier: 0 00:11:47.163 00:11:47.163 FDP test passed 00:11:47.163 00:11:47.163 real 0m0.294s 00:11:47.163 user 0m0.089s 00:11:47.163 sys 0m0.103s 00:11:47.163 03:56:29 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.163 03:56:29 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:11:47.163 ************************************ 00:11:47.163 END TEST nvme_flexible_data_placement 00:11:47.163 ************************************ 00:11:47.163 00:11:47.163 real 0m9.321s 00:11:47.163 user 0m1.706s 00:11:47.163 sys 0m2.661s 00:11:47.163 03:56:29 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.163 03:56:29 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:47.163 ************************************ 00:11:47.163 END TEST nvme_fdp 00:11:47.163 ************************************ 00:11:47.163 03:56:29 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:11:47.163 03:56:29 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:47.163 03:56:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:47.163 03:56:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.163 03:56:29 -- common/autotest_common.sh@10 -- # set +x 00:11:47.163 ************************************ 00:11:47.163 START TEST nvme_rpc 00:11:47.163 ************************************ 00:11:47.163 03:56:29 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:47.422 * Looking for test storage... 
00:11:47.422 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:47.422 03:56:29 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:47.422 03:56:29 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:11:47.422 03:56:29 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:47.422 03:56:30 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:47.422 03:56:30 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:47.422 03:56:30 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:47.422 03:56:30 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:47.422 03:56:30 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:47.422 03:56:30 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:47.422 03:56:30 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:47.422 03:56:30 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:47.422 03:56:30 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:47.422 03:56:30 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:47.422 03:56:30 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:47.423 03:56:30 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:47.423 03:56:30 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:47.423 03:56:30 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:11:47.423 03:56:30 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:47.423 03:56:30 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:47.423 03:56:30 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:47.423 03:56:30 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:11:47.423 03:56:30 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:47.423 03:56:30 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:11:47.423 03:56:30 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:47.423 03:56:30 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:47.423 03:56:30 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:11:47.423 03:56:30 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:47.423 03:56:30 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:11:47.423 03:56:30 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:47.423 03:56:30 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:47.423 03:56:30 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:47.423 03:56:30 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:11:47.423 03:56:30 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:47.423 03:56:30 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:47.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.423 --rc genhtml_branch_coverage=1 00:11:47.423 --rc genhtml_function_coverage=1 00:11:47.423 --rc genhtml_legend=1 00:11:47.423 --rc geninfo_all_blocks=1 00:11:47.423 --rc geninfo_unexecuted_blocks=1 00:11:47.423 00:11:47.423 ' 00:11:47.423 03:56:30 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:47.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.423 --rc genhtml_branch_coverage=1 00:11:47.423 --rc genhtml_function_coverage=1 00:11:47.423 --rc genhtml_legend=1 00:11:47.423 --rc geninfo_all_blocks=1 00:11:47.423 --rc geninfo_unexecuted_blocks=1 00:11:47.423 00:11:47.423 ' 00:11:47.423 03:56:30 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:11:47.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.423 --rc genhtml_branch_coverage=1 00:11:47.423 --rc genhtml_function_coverage=1 00:11:47.423 --rc genhtml_legend=1 00:11:47.423 --rc geninfo_all_blocks=1 00:11:47.423 --rc geninfo_unexecuted_blocks=1 00:11:47.423 00:11:47.423 ' 00:11:47.423 03:56:30 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:47.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.423 --rc genhtml_branch_coverage=1 00:11:47.423 --rc genhtml_function_coverage=1 00:11:47.423 --rc genhtml_legend=1 00:11:47.423 --rc geninfo_all_blocks=1 00:11:47.423 --rc geninfo_unexecuted_blocks=1 00:11:47.423 00:11:47.423 ' 00:11:47.423 03:56:30 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:47.423 03:56:30 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:47.423 03:56:30 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:47.423 03:56:30 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:11:47.423 03:56:30 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:11:47.423 03:56:30 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:11:47.423 03:56:30 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:11:47.423 03:56:30 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:11:47.423 03:56:30 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:47.423 03:56:30 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:47.423 03:56:30 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:11:47.683 03:56:30 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:11:47.683 03:56:30 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:47.683 03:56:30 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:11:47.683 03:56:30 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:11:47.683 03:56:30 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67039 00:11:47.683 03:56:30 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:47.683 03:56:30 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:47.683 03:56:30 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67039 00:11:47.683 03:56:30 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67039 ']' 00:11:47.683 03:56:30 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.683 03:56:30 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:47.683 03:56:30 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.683 03:56:30 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:47.683 03:56:30 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.683 [2024-12-07 03:56:30.293135] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
00:11:47.683 [2024-12-07 03:56:30.293258] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67039 ] 00:11:47.942 [2024-12-07 03:56:30.476270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:47.942 [2024-12-07 03:56:30.584515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.942 [2024-12-07 03:56:30.584546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.879 03:56:31 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:48.879 03:56:31 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:48.879 03:56:31 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:11:49.154 Nvme0n1 00:11:49.154 03:56:31 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:49.154 03:56:31 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:49.154 request: 00:11:49.154 { 00:11:49.154 "bdev_name": "Nvme0n1", 00:11:49.154 "filename": "non_existing_file", 00:11:49.154 "method": "bdev_nvme_apply_firmware", 00:11:49.154 "req_id": 1 00:11:49.154 } 00:11:49.154 Got JSON-RPC error response 00:11:49.154 response: 00:11:49.154 { 00:11:49.154 "code": -32603, 00:11:49.154 "message": "open file failed." 00:11:49.154 } 00:11:49.154 03:56:31 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:49.154 03:56:31 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:49.154 03:56:31 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:49.414 03:56:32 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:49.414 03:56:32 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67039 00:11:49.414 03:56:32 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67039 ']' 00:11:49.414 03:56:32 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67039 00:11:49.414 03:56:32 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:11:49.414 03:56:32 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.414 03:56:32 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67039 00:11:49.414 03:56:32 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:49.414 03:56:32 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:49.414 killing process with pid 67039 00:11:49.414 03:56:32 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67039' 00:11:49.414 03:56:32 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67039 00:11:49.414 03:56:32 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67039 00:11:51.951 00:11:51.951 real 0m4.463s 00:11:51.951 user 0m8.076s 00:11:51.951 sys 0m0.760s 00:11:51.951 03:56:34 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.951 03:56:34 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:51.951 ************************************ 00:11:51.951 END TEST nvme_rpc 00:11:51.951 ************************************ 00:11:51.951 03:56:34 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:51.951 03:56:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:11:51.951 03:56:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.951 03:56:34 -- common/autotest_common.sh@10 -- # set +x 00:11:51.951 ************************************ 00:11:51.951 START TEST nvme_rpc_timeouts 00:11:51.951 ************************************ 00:11:51.951 03:56:34 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:51.951 * Looking for test storage... 00:11:51.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:51.951 03:56:34 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:51.952 03:56:34 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:11:51.952 03:56:34 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:51.952 03:56:34 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:51.952 03:56:34 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:51.952 03:56:34 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:51.952 03:56:34 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:51.952 03:56:34 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:11:51.952 03:56:34 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:11:51.952 03:56:34 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:11:51.952 03:56:34 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:11:51.952 03:56:34 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:11:51.952 03:56:34 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:11:51.952 03:56:34 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:11:51.952 03:56:34 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:51.952 03:56:34 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:11:51.952 03:56:34 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:11:51.952 03:56:34 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:51.952 03:56:34 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:51.952 03:56:34 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:11:51.952 03:56:34 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:11:51.952 03:56:34 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:51.952 03:56:34 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:11:51.952 03:56:34 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:11:51.952 03:56:34 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:11:51.952 03:56:34 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:11:51.952 03:56:34 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:51.952 03:56:34 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:11:51.952 03:56:34 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:11:51.952 03:56:34 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:51.952 03:56:34 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:51.952 03:56:34 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:11:51.952 03:56:34 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:51.952 03:56:34 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:51.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.952 --rc genhtml_branch_coverage=1 00:11:51.952 --rc genhtml_function_coverage=1 00:11:51.952 --rc genhtml_legend=1 00:11:51.952 --rc geninfo_all_blocks=1 00:11:51.952 --rc geninfo_unexecuted_blocks=1 00:11:51.952 00:11:51.952 ' 00:11:51.952 03:56:34 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:51.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.952 --rc genhtml_branch_coverage=1 00:11:51.952 --rc genhtml_function_coverage=1 00:11:51.952 --rc genhtml_legend=1 00:11:51.952 --rc geninfo_all_blocks=1 00:11:51.952 --rc geninfo_unexecuted_blocks=1 00:11:51.952 00:11:51.952 ' 00:11:51.952 03:56:34 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:51.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.952 --rc genhtml_branch_coverage=1 00:11:51.952 --rc genhtml_function_coverage=1 00:11:51.952 --rc genhtml_legend=1 00:11:51.952 --rc geninfo_all_blocks=1 00:11:51.952 --rc geninfo_unexecuted_blocks=1 00:11:51.952 00:11:51.952 ' 00:11:51.952 03:56:34 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:51.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.952 --rc genhtml_branch_coverage=1 00:11:51.952 --rc genhtml_function_coverage=1 00:11:51.952 --rc genhtml_legend=1 00:11:51.952 --rc geninfo_all_blocks=1 00:11:51.952 --rc geninfo_unexecuted_blocks=1 00:11:51.952 00:11:51.952 ' 00:11:51.952 03:56:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:51.952 03:56:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67116 00:11:51.952 03:56:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67116 00:11:51.952 03:56:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67152 00:11:51.952 03:56:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:51.952 03:56:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:51.952 03:56:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67152 00:11:51.952 03:56:34 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67152 ']' 00:11:51.952 03:56:34 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.952 03:56:34 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:51.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.952 03:56:34 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.952 03:56:34 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:51.952 03:56:34 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:52.211 [2024-12-07 03:56:34.732633] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:11:52.211 [2024-12-07 03:56:34.732771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67152 ] 00:11:52.211 [2024-12-07 03:56:34.926500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:52.472 [2024-12-07 03:56:35.036263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.472 [2024-12-07 03:56:35.036297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:53.409 03:56:35 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:53.409 03:56:35 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:11:53.409 Checking default timeout settings: 00:11:53.409 03:56:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:53.409 03:56:35 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:53.671 Making settings changes with rpc: 00:11:53.671 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:53.671 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:53.671 Check default vs. modified settings: 00:11:53.671 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:11:53.671 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67116 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67116 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:54.240 Setting action_on_timeout is changed as expected. 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67116 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67116 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:54.240 Setting timeout_us is changed as expected. 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67116 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67116 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:54.240 Setting timeout_admin_us is changed as expected. 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67116 /tmp/settings_modified_67116 00:11:54.240 03:56:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67152 00:11:54.240 03:56:36 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67152 ']' 00:11:54.240 03:56:36 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67152 00:11:54.240 03:56:36 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:11:54.240 03:56:36 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:54.240 03:56:36 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67152 00:11:54.240 03:56:36 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:54.240 03:56:36 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:54.240 killing process with pid 67152 00:11:54.240 03:56:36 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67152' 00:11:54.240 03:56:36 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67152 00:11:54.240 03:56:36 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67152 00:11:56.802 RPC TIMEOUT SETTING TEST PASSED. 00:11:56.802 03:56:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
00:11:56.802 00:11:56.802 real 0m4.777s 00:11:56.802 user 0m8.875s 00:11:56.802 sys 0m0.809s 00:11:56.802 03:56:39 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.802 03:56:39 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:56.802 ************************************ 00:11:56.802 END TEST nvme_rpc_timeouts 00:11:56.802 ************************************ 00:11:56.802 03:56:39 -- spdk/autotest.sh@239 -- # uname -s 00:11:56.802 03:56:39 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:11:56.802 03:56:39 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:56.802 03:56:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:56.802 03:56:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:56.802 03:56:39 -- common/autotest_common.sh@10 -- # set +x 00:11:56.802 ************************************ 00:11:56.802 START TEST sw_hotplug 00:11:56.802 ************************************ 00:11:56.802 03:56:39 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:56.802 * Looking for test storage... 00:11:56.802 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:56.802 03:56:39 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:56.802 03:56:39 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:11:56.802 03:56:39 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:56.802 03:56:39 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:56.802 03:56:39 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:56.802 03:56:39 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:56.802 03:56:39 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:56.802 03:56:39 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:11:56.802 03:56:39 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:11:56.802 03:56:39 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:11:56.802 03:56:39 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:11:56.802 03:56:39 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:11:56.802 03:56:39 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:11:56.802 03:56:39 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:11:56.802 03:56:39 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:56.802 03:56:39 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:11:56.802 03:56:39 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:11:56.802 03:56:39 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:56.802 03:56:39 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:56.802 03:56:39 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:11:56.802 03:56:39 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:11:56.802 03:56:39 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:56.802 03:56:39 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:11:56.802 03:56:39 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:11:56.802 03:56:39 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:11:56.802 03:56:39 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:11:56.802 03:56:39 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:56.802 03:56:39 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:11:56.802 03:56:39 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:11:56.802 03:56:39 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:56.802 03:56:39 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:56.802 03:56:39 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:11:56.802 03:56:39 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:56.802 03:56:39 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:56.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.803 --rc genhtml_branch_coverage=1 00:11:56.803 --rc genhtml_function_coverage=1 00:11:56.803 --rc genhtml_legend=1 00:11:56.803 --rc geninfo_all_blocks=1 00:11:56.803 --rc geninfo_unexecuted_blocks=1 00:11:56.803 00:11:56.803 ' 00:11:56.803 03:56:39 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:56.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.803 --rc genhtml_branch_coverage=1 00:11:56.803 --rc genhtml_function_coverage=1 00:11:56.803 --rc genhtml_legend=1 00:11:56.803 --rc geninfo_all_blocks=1 00:11:56.803 --rc geninfo_unexecuted_blocks=1 00:11:56.803 00:11:56.803 ' 00:11:56.803 03:56:39 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:56.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.803 --rc genhtml_branch_coverage=1 00:11:56.803 --rc genhtml_function_coverage=1 00:11:56.803 --rc genhtml_legend=1 00:11:56.803 --rc geninfo_all_blocks=1 00:11:56.803 --rc geninfo_unexecuted_blocks=1 00:11:56.803 00:11:56.803 ' 00:11:56.803 03:56:39 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:56.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.803 --rc genhtml_branch_coverage=1 00:11:56.803 --rc genhtml_function_coverage=1 00:11:56.803 --rc genhtml_legend=1 00:11:56.803 --rc geninfo_all_blocks=1 00:11:56.803 --rc geninfo_unexecuted_blocks=1 00:11:56.803 00:11:56.803 ' 00:11:56.803 03:56:39 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:57.369 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:57.628 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:57.628 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:57.628 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:57.628 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:57.628 03:56:40 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:11:57.628 03:56:40 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:11:57.628 03:56:40 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
00:11:57.628 03:56:40 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:11:57.628 03:56:40 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:11:57.628 03:56:40 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:11:57.628 03:56:40 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:11:57.628 03:56:40 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:11:57.628 03:56:40 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:11:57.628 03:56:40 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@233 -- # local class 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:57.889 03:56:40 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:57.889 03:56:40 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:57.890 03:56:40 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:11:57.890 03:56:40 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:57.890 03:56:40 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:57.890 03:56:40 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:57.890 03:56:40 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:57.890 03:56:40 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:11:57.890 03:56:40 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:57.890 03:56:40 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:57.890 03:56:40 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:57.890 03:56:40 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:11:57.890 03:56:40 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:57.890 03:56:40 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:11:57.890 03:56:40 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:11:57.890 03:56:40 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:58.460 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:58.719 Waiting for block devices as requested 00:11:58.719 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:58.978 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:58.978 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:59.238 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:12:04.522 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:12:04.522 03:56:46 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:12:04.522 03:56:46 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:04.782 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:12:05.041 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:05.041 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:12:05.301 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:12:05.561 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:12:05.561 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:12:05.821 03:56:48 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:12:05.821 03:56:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:05.821 03:56:48 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:12:05.821 03:56:48 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:12:05.821 03:56:48 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68040 00:12:05.821 03:56:48 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:12:05.821 03:56:48 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:12:05.821 03:56:48 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:05.821 03:56:48 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:12:05.821 03:56:48 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:12:05.821 03:56:48 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:12:05.821 03:56:48 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:12:06.081 03:56:48 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:12:06.081 03:56:48 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:12:06.081 03:56:48 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:06.081 03:56:48 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:06.081 03:56:48 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:12:06.081 03:56:48 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:06.081 03:56:48 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:06.081 Initializing NVMe Controllers 00:12:06.081 Attaching to 0000:00:10.0 00:12:06.081 Attaching to 0000:00:11.0 00:12:06.081 Attached to 0000:00:11.0 00:12:06.081 Attached to 0000:00:10.0 00:12:06.081 Initialization complete. Starting I/O... 
00:12:06.081 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:12:06.081 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:12:06.081 00:12:07.464 QEMU NVMe Ctrl (12341 ): 1600 I/Os completed (+1600) 00:12:07.464 QEMU NVMe Ctrl (12340 ): 1600 I/Os completed (+1600) 00:12:07.464 00:12:08.401 QEMU NVMe Ctrl (12341 ): 3776 I/Os completed (+2176) 00:12:08.401 QEMU NVMe Ctrl (12340 ): 3777 I/Os completed (+2177) 00:12:08.401 00:12:09.536 QEMU NVMe Ctrl (12341 ): 6044 I/Os completed (+2268) 00:12:09.536 QEMU NVMe Ctrl (12340 ): 6045 I/Os completed (+2268) 00:12:09.536 00:12:10.140 QEMU NVMe Ctrl (12341 ): 8272 I/Os completed (+2228) 00:12:10.140 QEMU NVMe Ctrl (12340 ): 8273 I/Os completed (+2228) 00:12:10.140 00:12:11.079 QEMU NVMe Ctrl (12341 ): 10508 I/Os completed (+2236) 00:12:11.079 QEMU NVMe Ctrl (12340 ): 10509 I/Os completed (+2236) 00:12:11.079 00:12:12.020 03:56:54 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:12.020 03:56:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:12.020 03:56:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:12.020 [2024-12-07 03:56:54.564323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:12.020 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:12.020 [2024-12-07 03:56:54.567918] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.020 [2024-12-07 03:56:54.568013] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.020 [2024-12-07 03:56:54.568039] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.020 [2024-12-07 03:56:54.568061] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.020 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:12.020 [2024-12-07 03:56:54.570654] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.020 [2024-12-07 03:56:54.570705] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.020 [2024-12-07 03:56:54.570723] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.020 [2024-12-07 03:56:54.570741] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.020 03:56:54 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:12.020 03:56:54 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:12.020 [2024-12-07 03:56:54.610349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:12.020 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:12.020 [2024-12-07 03:56:54.611888] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.020 [2024-12-07 03:56:54.611945] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.020 [2024-12-07 03:56:54.611974] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.020 [2024-12-07 03:56:54.611993] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.020 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:12.020 [2024-12-07 03:56:54.614490] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.020 [2024-12-07 03:56:54.614527] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.020 [2024-12-07 03:56:54.614549] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.020 [2024-12-07 03:56:54.614566] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:12.020 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:12:12.020 EAL: Scan for (pci) bus failed. 00:12:12.020 03:56:54 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:12.020 03:56:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:12.020 03:56:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:12.020 03:56:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:12.020 03:56:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:12.280 00:12:12.280 03:56:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:12.280 03:56:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:12.280 03:56:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:12.280 03:56:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:12.280 03:56:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:12.280 Attaching to 0000:00:10.0 00:12:12.280 Attached to 0000:00:10.0 00:12:12.280 03:56:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:12.280 03:56:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:12.280 03:56:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:12.280 Attaching to 0000:00:11.0 00:12:12.280 Attached to 0000:00:11.0 00:12:13.219 QEMU NVMe Ctrl (12340 ): 1868 I/Os completed (+1868) 00:12:13.219 QEMU NVMe Ctrl (12341 ): 1694 I/Os completed (+1694) 00:12:13.219 00:12:14.157 QEMU NVMe Ctrl (12340 ): 3864 I/Os completed (+1996) 00:12:14.157 QEMU NVMe Ctrl (12341 ): 3698 I/Os completed (+2004) 00:12:14.157 00:12:15.094 QEMU NVMe Ctrl (12340 ): 5864 I/Os completed (+2000) 00:12:15.094 QEMU NVMe Ctrl (12341 ): 5702 I/Os completed (+2004) 00:12:15.094 00:12:16.472 QEMU NVMe Ctrl (12340 ): 7848 I/Os completed (+1984) 00:12:16.473 QEMU NVMe Ctrl (12341 ): 7686 I/Os completed (+1984) 00:12:16.473 00:12:17.408 QEMU NVMe Ctrl (12340 ): 9832 I/Os completed (+1984) 00:12:17.408 QEMU NVMe Ctrl (12341 ): 9674 I/Os completed (+1988) 00:12:17.408 00:12:18.343 QEMU NVMe Ctrl (12340 ): 11776 I/Os completed (+1944) 00:12:18.343 QEMU NVMe Ctrl (12341 ): 11620 I/Os completed (+1946) 00:12:18.343 00:12:19.280 QEMU NVMe Ctrl (12340 ): 13688 I/Os completed (+1912) 00:12:19.280 QEMU NVMe Ctrl (12341 ): 13536 I/Os completed (+1916) 
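The echo sequence traced at sw_hotplug.sh lines 56-62 (echo 1, then per device echo uio_pci_generic, the BDF twice, and an empty string) is the re-attach path: rescan the PCI bus and steer each recovered controller back to uio_pci_generic. xtrace does not show redirection targets, so the sysfs nodes in the following reconstruction are assumptions:

    # Hedged reconstruction of the re-attach sequence; every redirection
    # target here is an assumption, since bash xtrace hides redirections.
    echo 1 > /sys/bus/pci/rescan                               # re-enumerate the bus
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        echo "$dev" > /sys/bus/pci/drivers_probe               # ask the kernel to bind
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"  # clear the override
    done

The log shows the BDF echoed twice per device (lines 60-61), so the real script presumably writes it to two different nodes; which two is not recoverable from the trace.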
00:12:19.280 00:12:20.218 QEMU NVMe Ctrl (12340 ): 15612 I/Os completed (+1924) 00:12:20.218 QEMU NVMe Ctrl (12341 ): 15463 I/Os completed (+1927) 00:12:20.218 00:12:21.158 QEMU NVMe Ctrl (12340 ): 17528 I/Os completed (+1916) 00:12:21.158 QEMU NVMe Ctrl (12341 ): 17379 I/Os completed (+1916) 00:12:21.158 00:12:22.099 QEMU NVMe Ctrl (12340 ): 19420 I/Os completed (+1892) 00:12:22.099 QEMU NVMe Ctrl (12341 ): 19275 I/Os completed (+1896) 00:12:22.099 00:12:23.036 QEMU NVMe Ctrl (12340 ): 21384 I/Os completed (+1964) 00:12:23.036 QEMU NVMe Ctrl (12341 ): 21244 I/Os completed (+1969) 00:12:23.036 00:12:24.415 QEMU NVMe Ctrl (12340 ): 23352 I/Os completed (+1968) 00:12:24.415 QEMU NVMe Ctrl (12341 ): 23217 I/Os completed (+1973) 00:12:24.415 00:12:24.415 03:57:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:24.415 03:57:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:24.415 03:57:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:24.415 03:57:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:24.415 [2024-12-07 03:57:06.922166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:24.415 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:24.415 [2024-12-07 03:57:06.924119] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.415 [2024-12-07 03:57:06.924190] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.415 [2024-12-07 03:57:06.924217] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.415 [2024-12-07 03:57:06.924243] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.415 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:24.415 [2024-12-07 03:57:06.927334] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.416 [2024-12-07 03:57:06.927398] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.416 [2024-12-07 03:57:06.927420] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.416 [2024-12-07 03:57:06.927443] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.416 03:57:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:24.416 03:57:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:24.416 [2024-12-07 03:57:06.962116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:24.416 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:24.416 [2024-12-07 03:57:06.963832] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.416 [2024-12-07 03:57:06.963885] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.416 [2024-12-07 03:57:06.963917] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.416 [2024-12-07 03:57:06.963955] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.416 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:24.416 [2024-12-07 03:57:06.966743] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.416 [2024-12-07 03:57:06.966790] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.416 [2024-12-07 03:57:06.966814] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.416 [2024-12-07 03:57:06.966837] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.416 03:57:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:24.416 03:57:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:24.416 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:12:24.416 EAL: Scan for (pci) bus failed. 00:12:24.416 03:57:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:24.416 03:57:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:24.416 03:57:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:24.676 03:57:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:24.676 03:57:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:24.676 03:57:07 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:24.676 03:57:07 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:24.676 03:57:07 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:24.676 Attaching to 0000:00:10.0 00:12:24.676 Attached to 0000:00:10.0 00:12:24.676 03:57:07 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:24.676 03:57:07 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:24.676 03:57:07 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:24.676 Attaching to 0000:00:11.0 00:12:24.676 Attached to 0000:00:11.0 00:12:25.244 QEMU NVMe Ctrl (12340 ): 1068 I/Os completed (+1068) 00:12:25.244 QEMU NVMe Ctrl (12341 ): 896 I/Os completed (+896) 00:12:25.244 00:12:26.183 QEMU NVMe Ctrl (12340 ): 3048 I/Os completed (+1980) 00:12:26.183 QEMU NVMe Ctrl (12341 ): 2876 I/Os completed (+1980) 00:12:26.183 00:12:27.122 QEMU NVMe Ctrl (12340 ): 5044 I/Os completed (+1996) 00:12:27.122 QEMU NVMe Ctrl (12341 ): 4874 I/Os completed (+1998) 00:12:27.122 00:12:28.077 QEMU NVMe Ctrl (12340 ): 7064 I/Os completed (+2020) 00:12:28.077 QEMU NVMe Ctrl (12341 ): 6896 I/Os completed (+2022) 00:12:28.077 00:12:29.455 QEMU NVMe Ctrl (12340 ): 9272 I/Os completed (+2208) 00:12:29.455 QEMU NVMe Ctrl (12341 ): 9104 I/Os completed (+2208) 00:12:29.455 00:12:30.026 QEMU NVMe Ctrl (12340 ): 11484 I/Os completed (+2212) 00:12:30.026 QEMU NVMe Ctrl (12341 ): 11316 I/Os completed (+2212) 00:12:30.026 00:12:31.408 QEMU NVMe Ctrl (12340 ): 13696 I/Os completed (+2212) 00:12:31.408 QEMU NVMe Ctrl (12341 ): 13528 I/Os completed (+2212) 00:12:31.408 
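The EAL complaint above (cannot open sysfs value .../0000:00:11.0/vendor, Scan for (pci) bus failed) is consistent with a benign race: the hotplug application's DPDK bus scan runs while 0000:00:11.0 is still detached or only partially re-enumerated, so its sysfs attributes are unreadable and the scan is retried. The same condition can be probed by hand:

    # Hypothetical one-liner; checks whether the device's sysfs attributes
    # are readable, which is what the EAL scan tripped over.
    [[ -r /sys/bus/pci/devices/0000:00:11.0/vendor ]] \
        || echo '0000:00:11.0 is not (yet) visible on the PCI bus'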
00:12:32.347 QEMU NVMe Ctrl (12340 ): 15916 I/Os completed (+2220) 00:12:32.347 QEMU NVMe Ctrl (12341 ): 15748 I/Os completed (+2220) 00:12:32.347 00:12:33.283 QEMU NVMe Ctrl (12340 ): 18132 I/Os completed (+2216) 00:12:33.283 QEMU NVMe Ctrl (12341 ): 17964 I/Os completed (+2216) 00:12:33.283 00:12:34.231 QEMU NVMe Ctrl (12340 ): 20352 I/Os completed (+2220) 00:12:34.231 QEMU NVMe Ctrl (12341 ): 20184 I/Os completed (+2220) 00:12:34.231 00:12:35.169 QEMU NVMe Ctrl (12340 ): 22564 I/Os completed (+2212) 00:12:35.169 QEMU NVMe Ctrl (12341 ): 22396 I/Os completed (+2212) 00:12:35.169 00:12:36.106 QEMU NVMe Ctrl (12340 ): 24780 I/Os completed (+2216) 00:12:36.106 QEMU NVMe Ctrl (12341 ): 24612 I/Os completed (+2216) 00:12:36.106 00:12:36.673 03:57:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:36.673 03:57:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:36.673 03:57:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:36.673 03:57:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:36.673 [2024-12-07 03:57:19.315474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:36.673 Controller removed: QEMU NVMe Ctrl (12340 ) 00:12:36.673 [2024-12-07 03:57:19.317166] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:36.673 [2024-12-07 03:57:19.317225] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:36.673 [2024-12-07 03:57:19.317248] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:36.673 [2024-12-07 03:57:19.317271] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:36.673 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:36.673 [2024-12-07 03:57:19.320090] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:36.673 [2024-12-07 03:57:19.320142] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:36.673 [2024-12-07 03:57:19.320161] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:36.673 [2024-12-07 03:57:19.320180] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:36.673 03:57:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:36.673 03:57:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:36.673 [2024-12-07 03:57:19.357899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:36.673 Controller removed: QEMU NVMe Ctrl (12341 ) 00:12:36.673 [2024-12-07 03:57:19.359444] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:36.673 [2024-12-07 03:57:19.359493] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:36.673 [2024-12-07 03:57:19.359516] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:36.673 [2024-12-07 03:57:19.359536] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:36.673 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:36.673 [2024-12-07 03:57:19.361993] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:36.673 [2024-12-07 03:57:19.362035] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:36.673 [2024-12-07 03:57:19.362059] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:36.673 [2024-12-07 03:57:19.362075] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:36.673 03:57:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:12:36.673 03:57:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:36.932 03:57:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:36.932 03:57:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:36.932 03:57:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:36.932 03:57:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:36.932 03:57:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:36.932 03:57:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:36.932 03:57:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:36.932 03:57:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:36.932 Attaching to 0000:00:10.0 00:12:36.932 Attached to 0000:00:10.0 00:12:36.932 03:57:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:37.191 03:57:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:37.191 03:57:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:37.191 Attaching to 0000:00:11.0 00:12:37.191 Attached to 0000:00:11.0 00:12:37.191 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:12:37.191 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:12:37.191 [2024-12-07 03:57:19.695981] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:12:49.409 03:57:31 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:12:49.409 03:57:31 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:49.409 03:57:31 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.13 00:12:49.409 03:57:31 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.13 00:12:49.409 03:57:31 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:49.409 03:57:31 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.13 00:12:49.409 03:57:31 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.13 2 00:12:49.409 remove_attach_helper took 43.13s to complete (handling 2 nvme drive(s)) 03:57:31 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6 00:12:55.987 03:57:37 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68040 00:12:55.987 
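The helper's runtime is measured with the bash time keyword: TIMEFORMAT=%2R restricts its report to elapsed real seconds with two decimals, which is the 43.13 captured into helper_time and printed above. A simplified sketch of the idiom (the real timing_cmd in common/autotest_common.sh also preserves the command's own output and exit status; time_it is a hypothetical name):

    # Simplified timing idiom; discards the timed command's own output,
    # unlike the real timing_cmd.
    time_it() {
        local TIMEFORMAT=%2R elapsed
        # The time keyword reports on the group's stderr; capture just that.
        elapsed=$( { time "$@" > /dev/null 2>&1; } 2>&1 )
        echo "$elapsed"
    }

    helper_time=$(time_it remove_attach_helper 3 6 false)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2

The kill -0 68040 probe that closes this phase only tests whether the hotplug example is still alive; the 'No such process' reply on the next line confirms it already exited on its own, and the test then restarts in target mode (spdk_tgt plus RPC) for the bdev-based variant.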
/home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68040) - No such process 00:12:55.987 03:57:37 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68040 00:12:55.987 03:57:37 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:12:55.987 03:57:37 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:12:55.987 03:57:37 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:12:55.987 03:57:37 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68591 00:12:55.987 03:57:37 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:55.987 03:57:37 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:12:55.987 03:57:37 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68591 00:12:55.987 03:57:37 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 68591 ']' 00:12:55.987 03:57:37 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.987 03:57:37 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:55.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.987 03:57:37 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.987 03:57:37 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:55.987 03:57:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:55.987 [2024-12-07 03:57:37.814785] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:12:55.988 [2024-12-07 03:57:37.814949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68591 ] 00:12:55.988 [2024-12-07 03:57:37.993834] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.988 [2024-12-07 03:57:38.097743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:56.246 03:57:38 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:56.246 03:57:38 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:12:56.246 03:57:38 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:56.246 03:57:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.246 03:57:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:56.246 03:57:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.246 03:57:38 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:12:56.246 03:57:38 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:56.246 03:57:38 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:56.246 03:57:38 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:12:56.246 03:57:38 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:12:56.246 03:57:38 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:12:56.246 03:57:38 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:12:56.246 03:57:38 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:12:56.246 03:57:38 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:56.246 03:57:38 
sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:56.246 03:57:38 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:56.246 03:57:38 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:56.246 03:57:38 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:02.939 03:57:44 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:02.939 03:57:44 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:02.939 03:57:44 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:02.939 03:57:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:02.939 03:57:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:02.939 03:57:45 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:02.939 03:57:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:02.939 03:57:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:02.939 03:57:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:02.939 03:57:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:02.939 03:57:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:02.939 03:57:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.939 03:57:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:02.939 [2024-12-07 03:57:45.043960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:02.939 [2024-12-07 03:57:45.046655] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.939 [2024-12-07 03:57:45.046702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.939 [2024-12-07 03:57:45.046723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:02.939 [2024-12-07 03:57:45.046750] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.939 [2024-12-07 03:57:45.046762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.939 [2024-12-07 03:57:45.046777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:02.939 [2024-12-07 03:57:45.046790] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.939 [2024-12-07 03:57:45.046803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.939 [2024-12-07 03:57:45.046815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:02.939 [2024-12-07 03:57:45.046835] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.939 [2024-12-07 03:57:45.046846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.939 [2024-12-07 03:57:45.046860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:02.939 03:57:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.939 03:57:45 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:02.939 03:57:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:02.939 [2024-12-07 03:57:45.443279] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:13:02.939 [2024-12-07 03:57:45.445533] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.939 [2024-12-07 03:57:45.445572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.939 [2024-12-07 03:57:45.445605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:02.939 [2024-12-07 03:57:45.445625] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.939 [2024-12-07 03:57:45.445639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.939 [2024-12-07 03:57:45.445651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:02.939 [2024-12-07 03:57:45.445665] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.939 [2024-12-07 03:57:45.445676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.939 [2024-12-07 03:57:45.445690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:02.939 [2024-12-07 03:57:45.445702] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:02.939 [2024-12-07 03:57:45.445715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:02.939 [2024-12-07 03:57:45.445726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:02.939 03:57:45 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:02.939 03:57:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:02.939 03:57:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:02.939 03:57:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:02.939 03:57:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:02.939 03:57:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:02.939 03:57:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.939 03:57:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:02.939 03:57:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.939 03:57:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:02.939 03:57:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:03.197 03:57:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:03.197 03:57:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:03.197 03:57:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:03.197 03:57:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:03.197 03:57:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:03.197 
03:57:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:03.197 03:57:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:03.197 03:57:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:03.455 03:57:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:03.455 03:57:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:03.455 03:57:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:15.669 03:57:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:15.669 03:57:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:15.669 03:57:57 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:15.669 03:57:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:15.669 03:57:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:15.669 03:57:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:15.669 03:57:57 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.669 03:57:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:15.669 03:57:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.669 03:57:58 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:15.669 03:57:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:15.669 03:57:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:15.669 03:57:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:15.669 03:57:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:15.669 03:57:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:15.669 03:57:58 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:15.669 03:57:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:15.669 03:57:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:15.669 03:57:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:15.669 03:57:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:15.669 03:57:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:15.669 03:57:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.669 03:57:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:15.669 [2024-12-07 03:57:58.122880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
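With use_bdev=true the test no longer watches sysfs; it asks the running spdk_tgt which controllers are still attached. The bdev_bdfs helper's pipeline appears verbatim in the xtrace (sw_hotplug.sh lines 12-13); reassembled:

    # bdev_bdfs, reassembled from the xtrace: list every bdev over RPC,
    # extract each NVMe bdev's PCI address, de-duplicate.
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

The /dev/fd/63 argument in the trace indicates jq actually reads the RPC output through process substitution rather than a plain pipe; the two are equivalent here. After each surprise removal the returned list shrinks, which is what the (( 2 > 0 )) and (( 0 > 0 )) checks around it are counting.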
00:13:15.669 [2024-12-07 03:57:58.125240] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:15.669 [2024-12-07 03:57:58.125287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:15.669 [2024-12-07 03:57:58.125304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:15.669 [2024-12-07 03:57:58.125339] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:15.669 [2024-12-07 03:57:58.125354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:15.669 [2024-12-07 03:57:58.125370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:15.669 [2024-12-07 03:57:58.125383] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:15.669 [2024-12-07 03:57:58.125396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:15.669 [2024-12-07 03:57:58.125408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:15.669 [2024-12-07 03:57:58.125423] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:15.669 [2024-12-07 03:57:58.125434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:15.669 [2024-12-07 03:57:58.125448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:15.669 03:57:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.669 03:57:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:13:15.669 03:57:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:15.928 [2024-12-07 03:57:58.522210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
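Between checks the helper polls in half-second steps until the removed BDFs stop showing up in bdev_get_bdevs, printing 'Still waiting for %s to be gone' for whatever remains. A sketch of that poll, with the step order approximated from the trace:

    # Detach poll, simplified from sw_hotplug.sh lines 50-51.
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do
        sleep 0.5
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        bdfs=($(bdev_bdfs))
    done

The paired NOTICE lines surrounding this are the detach itself: each controller's four outstanding Asynchronous Event Requests (admin opcode 0c) are completed with generic status 00/07, Command Abort Requested, as the driver tears the queue pair down.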
00:13:15.928 [2024-12-07 03:57:58.524441] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:15.928 [2024-12-07 03:57:58.524480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:15.928 [2024-12-07 03:57:58.524500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:15.928 [2024-12-07 03:57:58.524518] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:15.928 [2024-12-07 03:57:58.524532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:15.928 [2024-12-07 03:57:58.524543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:15.928 [2024-12-07 03:57:58.524557] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:15.929 [2024-12-07 03:57:58.524567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:15.929 [2024-12-07 03:57:58.524580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:15.929 [2024-12-07 03:57:58.524592] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:15.929 [2024-12-07 03:57:58.524605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:15.929 [2024-12-07 03:57:58.524616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:15.929 03:57:58 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:13:15.929 03:57:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:15.929 03:57:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:15.929 03:57:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:15.929 03:57:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:15.929 03:57:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:15.929 03:57:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.929 03:57:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:15.929 03:57:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.188 03:57:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:16.188 03:57:58 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:16.188 03:57:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:16.188 03:57:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:16.188 03:57:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:16.188 03:57:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:16.188 03:57:58 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:16.188 03:57:58 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:16.188 03:57:58 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:16.188 03:57:58 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:13:16.447 03:57:58 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:16.447 03:57:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:16.447 03:57:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:28.654 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:28.654 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:28.654 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:28.654 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:28.654 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:28.654 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:28.654 03:58:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.654 03:58:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:28.654 03:58:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.654 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:28.654 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:28.654 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:28.654 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:28.654 [2024-12-07 03:58:11.101964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:28.654 [2024-12-07 03:58:11.104476] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.654 [2024-12-07 03:58:11.104538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.654 [2024-12-07 03:58:11.104556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:28.654 [2024-12-07 03:58:11.104580] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.654 [2024-12-07 03:58:11.104592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.654 [2024-12-07 03:58:11.104609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:28.654 [2024-12-07 03:58:11.104622] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.654 [2024-12-07 03:58:11.104636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.654 [2024-12-07 03:58:11.104647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:28.654 [2024-12-07 03:58:11.104662] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.654 [2024-12-07 03:58:11.104673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.654 [2024-12-07 03:58:11.104687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:28.654 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:28.654 03:58:11 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:28.654 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:28.654 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:28.654 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:28.654 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:28.654 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:28.654 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:28.654 03:58:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.654 03:58:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:28.654 03:58:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.654 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:28.654 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:28.914 [2024-12-07 03:58:11.501319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:13:28.914 [2024-12-07 03:58:11.503587] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.914 [2024-12-07 03:58:11.503629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.914 [2024-12-07 03:58:11.503648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:28.914 [2024-12-07 03:58:11.503669] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.914 [2024-12-07 03:58:11.503682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.914 [2024-12-07 03:58:11.503694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:28.914 [2024-12-07 03:58:11.503709] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.914 [2024-12-07 03:58:11.503719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.914 [2024-12-07 03:58:11.503735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:28.914 [2024-12-07 03:58:11.503747] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:28.914 [2024-12-07 03:58:11.503760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:28.914 [2024-12-07 03:58:11.503771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:29.173 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:29.173 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:29.173 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:29.173 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:29.173 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:29.173 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:13:29.173 03:58:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.173 03:58:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:29.173 03:58:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.173 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:29.173 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:29.173 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:29.174 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:29.174 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:29.433 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:29.433 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:29.433 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:29.433 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:29.433 03:58:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:29.433 03:58:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:29.433 03:58:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:29.433 03:58:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:41.670 03:58:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:41.670 03:58:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:41.670 03:58:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:41.670 03:58:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:41.670 03:58:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:41.670 03:58:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:41.670 03:58:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.670 03:58:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:41.670 03:58:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.670 03:58:24 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:41.670 03:58:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:41.670 03:58:24 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.15 00:13:41.670 03:58:24 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.15 00:13:41.670 03:58:24 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:13:41.670 03:58:24 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.15 00:13:41.670 03:58:24 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.15 2 00:13:41.670 remove_attach_helper took 45.15s to complete (handling 2 nvme drive(s)) 03:58:24 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:13:41.670 03:58:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.670 03:58:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:41.670 03:58:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.670 03:58:24 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:13:41.670 03:58:24 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.670 03:58:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:41.670 03:58:24 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.670 03:58:24 sw_hotplug -- 
nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:13:41.670 03:58:24 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:13:41.670 03:58:24 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:13:41.670 03:58:24 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:13:41.670 03:58:24 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:13:41.670 03:58:24 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:13:41.670 03:58:24 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:13:41.670 03:58:24 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:13:41.670 03:58:24 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:13:41.670 03:58:24 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:13:41.670 03:58:24 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:13:41.670 03:58:24 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:13:41.670 03:58:24 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:13:48.241 03:58:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:48.241 03:58:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:48.241 03:58:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:48.241 03:58:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:48.241 03:58:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:48.241 03:58:30 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:48.241 03:58:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:48.241 03:58:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:48.241 [2024-12-07 03:58:30.233544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
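The pass that begins here (lines 119-122 in the trace) first toggles the target's hotplug monitor over RPC, bdev_nvme_set_hotplug -d then -e, before running debug_remove_attach_helper 3 6 true again. Issuing the same RPCs by hand would look like the following, assuming the default /var/tmp/spdk.sock socket the target was started on:

    # Equivalent manual RPC calls against the running spdk_tgt (the -s flag
    # selects the RPC socket; /var/tmp/spdk.sock is the default).
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_set_hotplug -d   # disable the monitor
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_set_hotplug -e   # re-enable it
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs             # bdev list as JSON

rpc_cmd in the trace is the autotest shim that forwards these same flags to rpc.py.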
00:13:48.241 03:58:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:48.241 03:58:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:48.241 03:58:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.241 03:58:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:48.241 03:58:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:48.241 [2024-12-07 03:58:30.235159] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.241 [2024-12-07 03:58:30.235375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.241 [2024-12-07 03:58:30.235399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.241 [2024-12-07 03:58:30.235423] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.241 [2024-12-07 03:58:30.235436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.241 [2024-12-07 03:58:30.235452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.241 [2024-12-07 03:58:30.235465] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.241 [2024-12-07 03:58:30.235481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.241 [2024-12-07 03:58:30.235494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.241 [2024-12-07 03:58:30.235510] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.241 [2024-12-07 03:58:30.235522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.241 [2024-12-07 03:58:30.235540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.241 03:58:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.241 03:58:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:48.241 03:58:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:48.241 [2024-12-07 03:58:30.632894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:48.241 [2024-12-07 03:58:30.634511] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.241 [2024-12-07 03:58:30.634550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.241 [2024-12-07 03:58:30.634570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.241 [2024-12-07 03:58:30.634590] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.241 [2024-12-07 03:58:30.634604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.241 [2024-12-07 03:58:30.634616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.241 [2024-12-07 03:58:30.634632] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.241 [2024-12-07 03:58:30.634644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.241 [2024-12-07 03:58:30.634659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.241 [2024-12-07 03:58:30.634672] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:48.241 [2024-12-07 03:58:30.634685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:48.241 [2024-12-07 03:58:30.634697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:48.241 03:58:30 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:48.241 03:58:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:48.241 03:58:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:48.241 03:58:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:48.241 03:58:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:48.241 03:58:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:48.241 03:58:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.241 03:58:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:48.241 03:58:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.241 03:58:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:48.241 03:58:30 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:48.241 03:58:30 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:48.241 03:58:30 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:48.241 03:58:30 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:48.500 03:58:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:48.500 03:58:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:48.500 03:58:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:48.500 03:58:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:48.500 03:58:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:13:48.500 03:58:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:48.500 03:58:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:48.500 03:58:31 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:00.752 03:58:43 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:00.752 03:58:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:00.752 03:58:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:00.752 03:58:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:00.752 03:58:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:00.752 03:58:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:00.752 03:58:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.752 03:58:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:00.752 03:58:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.752 03:58:43 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:00.752 03:58:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:00.752 03:58:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:00.752 03:58:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:00.752 03:58:43 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:00.752 03:58:43 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:00.752 03:58:43 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:00.752 03:58:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:00.752 03:58:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:00.752 03:58:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:00.752 03:58:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:00.752 03:58:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:00.752 03:58:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.752 03:58:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:00.752 [2024-12-07 03:58:43.312488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
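Each iteration ends with the @71 comparison visible above: the sorted BDF list reported by the target is matched against the expected pair, with xtrace showing the right-hand side backslash-escaped because [[ == ]] would otherwise treat it as a glob pattern. Reconstructed:

    # Post-attach verification, reconstructed from the @70-@71 trace lines.
    bdfs=($(bdev_bdfs))
    # Quoting the right-hand side forces a literal string comparison in [[ ]].
    [[ ${bdfs[*]} == "0000:00:10.0 0000:00:11.0" ]]

Only once both controllers reappear in bdev_get_bdevs does the loop move on to the next hotplug event; the removal logged at the end of the entries above continues below as the next iteration's detach.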
00:14:00.752 [2024-12-07 03:58:43.314671] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:00.752 [2024-12-07 03:58:43.314805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:00.752 [2024-12-07 03:58:43.314879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:00.752 [2024-12-07 03:58:43.314967] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:00.752 [2024-12-07 03:58:43.315119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:00.752 [2024-12-07 03:58:43.315206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:00.752 [2024-12-07 03:58:43.315263] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:00.752 [2024-12-07 03:58:43.315315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:00.752 [2024-12-07 03:58:43.315365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:00.752 [2024-12-07 03:58:43.315527] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:00.752 [2024-12-07 03:58:43.315609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:00.752 [2024-12-07 03:58:43.315669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:00.752 03:58:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.752 03:58:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:14:00.752 03:58:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:01.011 [2024-12-07 03:58:43.711844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:14:01.011 [2024-12-07 03:58:43.715724] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:01.011 [2024-12-07 03:58:43.715821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.011 [2024-12-07 03:58:43.715892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.011 [2024-12-07 03:58:43.715967] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:01.011 [2024-12-07 03:58:43.716022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.011 [2024-12-07 03:58:43.716068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.011 [2024-12-07 03:58:43.716115] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:01.011 [2024-12-07 03:58:43.716157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.011 [2024-12-07 03:58:43.716203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.011 [2024-12-07 03:58:43.716248] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:01.011 [2024-12-07 03:58:43.716292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:01.011 [2024-12-07 03:58:43.716334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:01.270 03:58:43 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:14:01.270 03:58:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:01.270 03:58:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:01.270 03:58:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:01.270 03:58:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:01.270 03:58:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:01.270 03:58:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.270 03:58:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:01.270 03:58:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.270 03:58:43 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:01.270 03:58:43 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:01.530 03:58:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:01.530 03:58:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:01.530 03:58:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:01.530 03:58:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:01.530 03:58:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:01.530 03:58:44 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:01.530 03:58:44 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:01.530 03:58:44 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:14:01.530 03:58:44 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:01.788 03:58:44 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:01.788 03:58:44 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:14.050 03:58:56 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:14.050 03:58:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:14.050 03:58:56 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:14.050 03:58:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:14.050 03:58:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:14.050 03:58:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:14.050 03:58:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.050 03:58:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:14.050 03:58:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.050 03:58:56 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:14.050 03:58:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:14.050 03:58:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:14.050 03:58:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:14.050 03:58:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:14:14.050 03:58:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:14:14.050 [2024-12-07 03:58:56.391436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:14:14.050 [2024-12-07 03:58:56.393265] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.050 [2024-12-07 03:58:56.393309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.050 [2024-12-07 03:58:56.393326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.050 [2024-12-07 03:58:56.393352] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.050 [2024-12-07 03:58:56.393364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.050 [2024-12-07 03:58:56.393381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.050 [2024-12-07 03:58:56.393394] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.050 [2024-12-07 03:58:56.393412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.050 [2024-12-07 03:58:56.393423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.050 [2024-12-07 03:58:56.393439] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.050 [2024-12-07 03:58:56.393451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.050 [2024-12-07 03:58:56.393465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.050 03:58:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:14:14.050 03:58:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:14.050 03:58:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:14.050 03:58:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:14.050 03:58:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:14.050 03:58:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.050 03:58:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:14.050 03:58:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:14.050 03:58:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.050 03:58:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:14:14.050 03:58:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:14:14.309 [2024-12-07 03:58:56.790783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:14:14.309 [2024-12-07 03:58:56.792405] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.309 [2024-12-07 03:58:56.792444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.309 [2024-12-07 03:58:56.792464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.309 [2024-12-07 03:58:56.792485] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.309 [2024-12-07 03:58:56.792499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.309 [2024-12-07 03:58:56.792512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.309 [2024-12-07 03:58:56.792529] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.309 [2024-12-07 03:58:56.792540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.309 [2024-12-07 03:58:56.792554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.309 [2024-12-07 03:58:56.792567] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:14:14.309 [2024-12-07 03:58:56.792584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:14.309 [2024-12-07 03:58:56.792596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:14.309 03:58:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:14:14.309 03:58:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:14:14.309 03:58:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:14:14.310 03:58:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:14.310 03:58:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:14.310 03:58:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:14:14.310 03:58:56 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.310 03:58:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:14.310 03:58:56 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.310 03:58:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:14:14.310 03:58:56 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:14:14.569 03:58:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:14.569 03:58:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:14.569 03:58:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:14:14.569 03:58:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:14:14.569 03:58:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:14.569 03:58:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:14:14.569 03:58:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:14:14.569 03:58:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:14:14.827 03:58:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:14:14.827 03:58:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:14:14.827 03:58:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:14:27.032 03:59:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:14:27.032 03:59:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:14:27.032 03:59:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:14:27.032 03:59:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:14:27.032 03:59:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:14:27.032 03:59:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:14:27.032 03:59:09 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.032 03:59:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:27.032 03:59:09 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.032 03:59:09 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:14:27.032 03:59:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:14:27.032 03:59:09 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.23 00:14:27.032 03:59:09 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.23 00:14:27.032 03:59:09 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:14:27.032 03:59:09 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.23 00:14:27.032 03:59:09 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.23 2 00:14:27.032 remove_attach_helper took 45.23s to complete (handling 2 nvme drive(s)) 03:59:09 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:14:27.032 03:59:09 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68591 00:14:27.032 03:59:09 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 68591 ']' 00:14:27.032 03:59:09 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 68591 00:14:27.032 03:59:09 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:14:27.032 03:59:09 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:27.032 03:59:09 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68591 00:14:27.032 killing process with pid 68591 00:14:27.032 03:59:09 sw_hotplug -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:14:27.032 03:59:09 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:27.032 03:59:09 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68591' 00:14:27.032 03:59:09 sw_hotplug -- common/autotest_common.sh@973 -- # kill 68591 00:14:27.032 03:59:09 sw_hotplug -- common/autotest_common.sh@978 -- # wait 68591 00:14:29.572 03:59:11 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:29.831 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:30.401 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:30.401 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:14:30.401 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:30.401 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:30.401 00:14:30.401 real 2m33.894s 00:14:30.401 user 1m51.593s 00:14:30.401 sys 0m22.508s 00:14:30.401 03:59:13 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:30.401 ************************************ 00:14:30.401 END TEST sw_hotplug 00:14:30.401 ************************************ 00:14:30.401 03:59:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:14:30.661 03:59:13 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:14:30.661 03:59:13 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:30.661 03:59:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:30.661 03:59:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:30.661 03:59:13 -- common/autotest_common.sh@10 -- # set +x 00:14:30.661 ************************************ 00:14:30.661 START TEST nvme_xnvme 00:14:30.661 ************************************ 00:14:30.661 03:59:13 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:14:30.661 * Looking for test storage... 
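[annotator note] The echo triplets at sw_hotplug.sh@59-@62, and the "nvme -> uio_pci_generic" lines from setup.sh just above, are the standard sysfs driver_override rebind. xtrace never shows redirection targets, so the paths below are a plausible reconstruction under the usual Linux PCI sysfs interface, not the script's literal code:

    # Rebind one controller to the userspace driver (targets assumed; xtrace
    # only logs the echo arguments, never the `> /sys/...` redirects).
    bdf=0000:00:10.0
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"   # @59
    echo "$bdf" > /sys/bus/pci/drivers_probe                             # @60/@61
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"                # @62: clear override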
00:14:30.661 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:30.661 03:59:13 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:30.661 03:59:13 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:14:30.661 03:59:13 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:30.924 03:59:13 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:30.924 03:59:13 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:30.924 03:59:13 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:30.924 03:59:13 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:30.924 03:59:13 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:30.924 03:59:13 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:30.924 03:59:13 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:30.924 03:59:13 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:30.924 03:59:13 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:30.924 03:59:13 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:30.924 03:59:13 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:30.924 03:59:13 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:30.924 03:59:13 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:14:30.924 03:59:13 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:14:30.924 03:59:13 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:30.924 03:59:13 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:30.924 03:59:13 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:14:30.924 03:59:13 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:14:30.924 03:59:13 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:30.924 03:59:13 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:14:30.924 03:59:13 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:30.924 03:59:13 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:14:30.924 03:59:13 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:14:30.924 03:59:13 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:30.924 03:59:13 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:14:30.924 03:59:13 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:30.924 03:59:13 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:30.924 03:59:13 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:30.924 03:59:13 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:14:30.924 03:59:13 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:30.924 03:59:13 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:30.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.924 --rc genhtml_branch_coverage=1 00:14:30.924 --rc genhtml_function_coverage=1 00:14:30.924 --rc genhtml_legend=1 00:14:30.924 --rc geninfo_all_blocks=1 00:14:30.924 --rc geninfo_unexecuted_blocks=1 00:14:30.924 00:14:30.924 ' 00:14:30.924 03:59:13 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:30.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.924 --rc genhtml_branch_coverage=1 00:14:30.924 --rc genhtml_function_coverage=1 00:14:30.924 --rc genhtml_legend=1 00:14:30.924 --rc geninfo_all_blocks=1 00:14:30.924 --rc geninfo_unexecuted_blocks=1 00:14:30.924 00:14:30.924 ' 00:14:30.924 03:59:13 
nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:30.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.924 --rc genhtml_branch_coverage=1 00:14:30.924 --rc genhtml_function_coverage=1 00:14:30.924 --rc genhtml_legend=1 00:14:30.924 --rc geninfo_all_blocks=1 00:14:30.924 --rc geninfo_unexecuted_blocks=1 00:14:30.924 00:14:30.924 ' 00:14:30.924 03:59:13 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:30.924 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:30.924 --rc genhtml_branch_coverage=1 00:14:30.924 --rc genhtml_function_coverage=1 00:14:30.924 --rc genhtml_legend=1 00:14:30.924 --rc geninfo_all_blocks=1 00:14:30.924 --rc geninfo_unexecuted_blocks=1 00:14:30.924 00:14:30.924 ' 00:14:30.924 03:59:13 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:14:30.924 03:59:13 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:14:30.924 03:59:13 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:14:30.924 03:59:13 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:14:30.924 03:59:13 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:14:30.924 03:59:13 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:14:30.924 03:59:13 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:14:30.924 03:59:13 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:14:30.924 03:59:13 nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:14:30.924 03:59:13 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:14:30.924 03:59:13 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:14:30.924 03:59:13 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:14:30.924 03:59:13 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:14:30.924 03:59:13 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:14:30.924 03:59:13 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:14:30.924 03:59:13 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:14:30.924 03:59:13 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:14:30.924 03:59:13 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:14:30.924 03:59:13 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:14:30.924 03:59:13 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:14:30.924 03:59:13 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:14:30.924 03:59:13 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:14:30.924 03:59:13 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:14:30.924 03:59:13 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:14:30.924 03:59:13 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:14:30.924 03:59:13 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:14:30.924 03:59:13 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:14:30.924 03:59:13 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:14:30.924 03:59:13 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:14:30.924 03:59:13 nvme_xnvme -- common/build_config.sh@20 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:14:30.924 03:59:13 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:14:30.924 03:59:13 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:14:30.924 03:59:13 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:14:30.924 03:59:13 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:14:30.924 03:59:13 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:14:30.924 03:59:13 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 
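[annotator note] Stepping back to the lcov probe a few entries up (`lt 1.15 2` through scripts/common.sh@333-@368): versions are split on dots, dashes and colons, then compared element by element. A condensed sketch that follows the traced flow, with the decimal() sanitizer reduced to a default of 0 for missing fields (numeric fields assumed):

    # Compare two dotted versions; mirrors the IFS split and element-wise
    # comparison traced from scripts/common.sh.
    cmp_versions() {
        local IFS=.-:                 # split fields on '.', '-' and ':'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        local op=$2
        read -ra ver2 <<< "$3"
        local v a b
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields compare as 0
            if ((a > b)); then [[ $op == '>' ]]; return; fi
            if ((a < b)); then [[ $op == '<' ]]; return; fi
        done
        [[ $op == '==' ]]             # all fields equal
    }
    lt() { cmp_versions "$1" '<' "$2"; }      # the wrapper seen in the trace
    lt 1.15 2 && echo "lcov 1.15 is older than 2"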
00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:14:30.925 03:59:13 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:14:30.925 03:59:13 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:14:30.925 03:59:13 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:14:30.925 03:59:13 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:14:30.925 03:59:13 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:14:30.925 03:59:13 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:14:30.925 03:59:13 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:14:30.925 03:59:13 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:14:30.925 03:59:13 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 
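[annotator note] The applications.sh entries that follow decide whether this is a debug build by slurping the generated config header and glob-matching it against the DEBUG define; a condensed sketch of that check (the header path is this run's, the variable name at the end is illustrative):

    # Matches applications.sh@22-@23 below: read the whole generated header
    # and pattern-match it for the debug define.
    config_h=/home/vagrant/spdk_repo/spdk/include/spdk/config.h
    if [[ -e $config_h && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        : # debug build: applications.sh then honors SPDK_AUTOTEST_DEBUG_APPS (@24)
    fi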
00:14:30.925 03:59:13 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:14:30.925 03:59:13 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:14:30.925 03:59:13 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:14:30.925 03:59:13 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:14:30.925 03:59:13 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:14:30.925 03:59:13 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:14:30.925 03:59:13 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:14:30.925 03:59:13 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:14:30.925 #define SPDK_CONFIG_H 00:14:30.925 #define SPDK_CONFIG_AIO_FSDEV 1 00:14:30.925 #define SPDK_CONFIG_APPS 1 00:14:30.925 #define SPDK_CONFIG_ARCH native 00:14:30.925 #define SPDK_CONFIG_ASAN 1 00:14:30.925 #undef SPDK_CONFIG_AVAHI 00:14:30.925 #undef SPDK_CONFIG_CET 00:14:30.925 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:14:30.925 #define SPDK_CONFIG_COVERAGE 1 00:14:30.925 #define SPDK_CONFIG_CROSS_PREFIX 00:14:30.925 #undef SPDK_CONFIG_CRYPTO 00:14:30.925 #undef SPDK_CONFIG_CRYPTO_MLX5 00:14:30.925 #undef SPDK_CONFIG_CUSTOMOCF 00:14:30.925 #undef SPDK_CONFIG_DAOS 00:14:30.925 #define SPDK_CONFIG_DAOS_DIR 00:14:30.925 #define SPDK_CONFIG_DEBUG 1 00:14:30.925 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:14:30.925 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:14:30.925 #define SPDK_CONFIG_DPDK_INC_DIR 00:14:30.925 #define SPDK_CONFIG_DPDK_LIB_DIR 00:14:30.925 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:14:30.925 #undef SPDK_CONFIG_DPDK_UADK 00:14:30.925 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:14:30.925 #define SPDK_CONFIG_EXAMPLES 1 00:14:30.925 #undef SPDK_CONFIG_FC 00:14:30.925 #define SPDK_CONFIG_FC_PATH 00:14:30.925 #define SPDK_CONFIG_FIO_PLUGIN 1 00:14:30.925 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:14:30.925 #define SPDK_CONFIG_FSDEV 1 00:14:30.925 #undef SPDK_CONFIG_FUSE 00:14:30.925 #undef SPDK_CONFIG_FUZZER 00:14:30.925 #define SPDK_CONFIG_FUZZER_LIB 00:14:30.925 #undef SPDK_CONFIG_GOLANG 00:14:30.925 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:14:30.925 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:14:30.925 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:14:30.925 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:14:30.925 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:14:30.925 #undef SPDK_CONFIG_HAVE_LIBBSD 00:14:30.925 #undef SPDK_CONFIG_HAVE_LZ4 00:14:30.925 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:14:30.925 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:14:30.925 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:14:30.925 #define SPDK_CONFIG_IDXD 1 00:14:30.925 #define SPDK_CONFIG_IDXD_KERNEL 1 00:14:30.925 #undef SPDK_CONFIG_IPSEC_MB 00:14:30.925 #define SPDK_CONFIG_IPSEC_MB_DIR 00:14:30.925 #define SPDK_CONFIG_ISAL 1 00:14:30.925 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:14:30.925 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:14:30.925 #define SPDK_CONFIG_LIBDIR 00:14:30.925 #undef SPDK_CONFIG_LTO 00:14:30.925 #define SPDK_CONFIG_MAX_LCORES 128 00:14:30.925 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:14:30.925 #define SPDK_CONFIG_NVME_CUSE 1 00:14:30.925 #undef SPDK_CONFIG_OCF 00:14:30.925 #define SPDK_CONFIG_OCF_PATH 00:14:30.925 #define SPDK_CONFIG_OPENSSL_PATH 00:14:30.925 #undef SPDK_CONFIG_PGO_CAPTURE 00:14:30.925 
#define SPDK_CONFIG_PGO_DIR 00:14:30.925 #undef SPDK_CONFIG_PGO_USE 00:14:30.925 #define SPDK_CONFIG_PREFIX /usr/local 00:14:30.925 #undef SPDK_CONFIG_RAID5F 00:14:30.925 #undef SPDK_CONFIG_RBD 00:14:30.925 #define SPDK_CONFIG_RDMA 1 00:14:30.925 #define SPDK_CONFIG_RDMA_PROV verbs 00:14:30.925 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:14:30.925 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:14:30.925 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:14:30.925 #define SPDK_CONFIG_SHARED 1 00:14:30.925 #undef SPDK_CONFIG_SMA 00:14:30.925 #define SPDK_CONFIG_TESTS 1 00:14:30.925 #undef SPDK_CONFIG_TSAN 00:14:30.925 #define SPDK_CONFIG_UBLK 1 00:14:30.925 #define SPDK_CONFIG_UBSAN 1 00:14:30.925 #undef SPDK_CONFIG_UNIT_TESTS 00:14:30.925 #undef SPDK_CONFIG_URING 00:14:30.925 #define SPDK_CONFIG_URING_PATH 00:14:30.925 #undef SPDK_CONFIG_URING_ZNS 00:14:30.925 #undef SPDK_CONFIG_USDT 00:14:30.925 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:14:30.925 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:14:30.925 #undef SPDK_CONFIG_VFIO_USER 00:14:30.925 #define SPDK_CONFIG_VFIO_USER_DIR 00:14:30.926 #define SPDK_CONFIG_VHOST 1 00:14:30.926 #define SPDK_CONFIG_VIRTIO 1 00:14:30.926 #undef SPDK_CONFIG_VTUNE 00:14:30.926 #define SPDK_CONFIG_VTUNE_DIR 00:14:30.926 #define SPDK_CONFIG_WERROR 1 00:14:30.926 #define SPDK_CONFIG_WPDK_DIR 00:14:30.926 #define SPDK_CONFIG_XNVME 1 00:14:30.926 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:14:30.926 03:59:13 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:30.926 03:59:13 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:14:30.926 03:59:13 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.926 03:59:13 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.926 03:59:13 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.926 03:59:13 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.926 03:59:13 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.926 03:59:13 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.926 03:59:13 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:14:30.926 03:59:13 nvme_xnvme -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:14:30.926 03:59:13 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:14:30.926 03:59:13 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:14:30.926 03:59:13 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:14:30.926 03:59:13 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:14:30.926 03:59:13 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:14:30.926 03:59:13 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:14:30.926 03:59:13 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:14:30.926 03:59:13 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:14:30.926 03:59:13 nvme_xnvme -- pm/common@68 -- # uname -s 00:14:30.926 03:59:13 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:14:30.926 03:59:13 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:14:30.926 03:59:13 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:14:30.926 03:59:13 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:14:30.926 03:59:13 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:14:30.926 03:59:13 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:14:30.926 03:59:13 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:14:30.926 03:59:13 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:14:30.926 03:59:13 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:14:30.926 03:59:13 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:14:30.926 03:59:13 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:14:30.926 03:59:13 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:14:30.926 03:59:13 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:14:30.926 03:59:13 nvme_xnvme -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:14:30.926 03:59:13 
nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:14:30.926 03:59:13 nvme_xnvme -- common/autotest_common.sh@142 -- 
# : true 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:14:30.927 
03:59:13 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:30.927 
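[annotator note] The sanitizer setup just traced (autotest_common.sh@199-@244) builds a leak-suppression file on the fly; its net effect, with the path and suppression taken from the log and the assembly details assumed:

    # Net effect of autotest_common.sh@204-@244 above.
    rm -rf /var/tmp/asan_suppression_file
    echo 'leak:libfuse3.so' > /var/tmp/asan_suppression_file
    export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file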
03:59:13 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 
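[annotator note] The long run of ": 0" / ": 1" traces followed by exports (autotest_common.sh@58-@178 above) is one idiom repeated per test flag: assign a default only if the environment did not already set the variable, then export it. xtrace prints the already-expanded ": 0", which is why no variable name appears; one flag's worth, reconstructed:

    # Default-and-export idiom behind each ": 0 / export SPDK_TEST_*" pair.
    : "${SPDK_TEST_NVME:=1}"    # this run has NVME=1, NVMF=0, XNVME=1, ...
    export SPDK_TEST_NVME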
00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 69941 ]] 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 69941 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.s4KxHX 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:14:30.927 03:59:13 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.s4KxHX/tests/xnvme /tmp/spdk.s4KxHX 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974454272 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593595904 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:14:30.928 
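Annotation: the read/assign pairs around this point are set_test_storage consuming df -T one row at a time; each mount point becomes a key in five associative arrays, which are then checked against the storage candidates for the requested ~2 GiB of free space. A condensed sketch of the parsing loop (field order exactly as read in the trace; sizes are in whatever units this df build emits):

declare -A mounts fss sizes avails uses
while read -r source fs size use avail _ mount; do
  mounts["$mount"]=$source   # device or remote source backing the mount
  fss["$mount"]=$fs          # filesystem type (btrfs, tmpfs, ext4, ...)
  sizes["$mount"]=$size
  avails["$mount"]=$avail
  uses["$mount"]=$use
done < <(df -T | grep -v Filesystem)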
03:59:13 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261661696 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13974454272 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5593595904 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266277888 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:14:30.928 03:59:13 
nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=97950609408 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=1752170496 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:14:30.928 * Looking for test storage... 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:14:30.928 03:59:13 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13974454272 00:14:31.189 03:59:13 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:14:31.189 03:59:13 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:14:31.189 03:59:13 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:14:31.189 03:59:13 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:14:31.189 03:59:13 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:14:31.189 03:59:13 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:31.189 03:59:13 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:31.189 03:59:13 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:31.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:14:31.189 03:59:13 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:14:31.189 03:59:13 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:14:31.189 03:59:13 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:14:31.189 03:59:13 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:14:31.189 03:59:13 nvme_xnvme -- 
common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:14:31.189 03:59:13 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:14:31.189 03:59:13 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:14:31.189 03:59:13 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:14:31.189 03:59:13 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:14:31.189 03:59:13 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:14:31.189 03:59:13 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:14:31.189 03:59:13 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:14:31.189 03:59:13 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:14:31.189 03:59:13 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:14:31.189 03:59:13 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:14:31.189 03:59:13 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:31.189 03:59:13 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:14:31.189 03:59:13 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:31.189 03:59:13 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:31.189 03:59:13 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:31.189 03:59:13 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:31.189 03:59:13 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:31.189 03:59:13 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:31.189 03:59:13 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:31.189 03:59:13 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:31.189 03:59:13 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:31.189 03:59:13 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:31.189 03:59:13 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:31.189 03:59:13 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:31.189 03:59:13 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:31.189 03:59:13 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:14:31.189 03:59:13 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:14:31.189 03:59:13 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:31.189 03:59:13 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:31.189 03:59:13 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:14:31.189 03:59:13 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:14:31.189 03:59:13 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:31.189 03:59:13 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:14:31.189 03:59:13 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:31.189 03:59:13 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:14:31.189 03:59:13 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:14:31.189 03:59:13 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:31.189 03:59:13 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:14:31.189 03:59:13 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:31.189 03:59:13 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:31.189 03:59:13 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:31.189 03:59:13 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:14:31.189 03:59:13 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:31.189 03:59:13 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:31.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.189 --rc genhtml_branch_coverage=1 00:14:31.189 --rc genhtml_function_coverage=1 00:14:31.189 --rc genhtml_legend=1 00:14:31.189 --rc geninfo_all_blocks=1 00:14:31.189 --rc geninfo_unexecuted_blocks=1 00:14:31.189 00:14:31.189 ' 00:14:31.189 03:59:13 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:31.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.189 --rc genhtml_branch_coverage=1 00:14:31.189 --rc genhtml_function_coverage=1 00:14:31.189 --rc genhtml_legend=1 00:14:31.189 --rc geninfo_all_blocks=1 00:14:31.189 --rc geninfo_unexecuted_blocks=1 00:14:31.189 00:14:31.189 ' 00:14:31.189 03:59:13 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:31.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.189 --rc genhtml_branch_coverage=1 00:14:31.189 --rc genhtml_function_coverage=1 00:14:31.189 --rc genhtml_legend=1 00:14:31.189 --rc geninfo_all_blocks=1 00:14:31.189 --rc geninfo_unexecuted_blocks=1 00:14:31.189 00:14:31.189 ' 00:14:31.189 03:59:13 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:31.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.190 --rc genhtml_branch_coverage=1 00:14:31.190 --rc genhtml_function_coverage=1 00:14:31.190 --rc genhtml_legend=1 00:14:31.190 --rc geninfo_all_blocks=1 00:14:31.190 --rc geninfo_unexecuted_blocks=1 00:14:31.190 00:14:31.190 ' 00:14:31.190 03:59:13 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:31.190 03:59:13 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:14:31.190 03:59:13 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.190 03:59:13 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.190 03:59:13 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.190 03:59:13 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.190 03:59:13 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.190 03:59:13 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.190 03:59:13 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:14:31.190 03:59:13 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.190 03:59:13 nvme_xnvme -- xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:14:31.190 03:59:13 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:14:31.190 03:59:13 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:14:31.190 03:59:13 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:14:31.190 03:59:13 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:14:31.190 03:59:13 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:14:31.190 03:59:13 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:14:31.190 03:59:13 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:14:31.190 03:59:13 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:14:31.190 03:59:13 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:14:31.190 03:59:13 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:14:31.190 03:59:13 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:14:31.190 03:59:13 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:14:31.190 03:59:13 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:14:31.190 03:59:13 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:14:31.190 
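Annotation: xnvme/common.sh pins down which I/O mechanisms the suite exercises and which device node each one drives; the declare line for the filename map continues just below. The shape of that configuration, values copied from the trace:

declare -a xnvme_io=(libaio io_uring io_uring_cmd)
declare -A xnvme_filename=(
  [libaio]=/dev/nvme0n1       # block node for the POSIX AIO backend
  [io_uring]=/dev/nvme0n1     # same block node, io_uring submission
  [io_uring_cmd]=/dev/ng0n1   # NVMe generic char node for uring passthrough
)
for io in "${xnvme_io[@]}"; do
  printf '%s -> %s\n' "$io" "${xnvme_filename[$io]}"
done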
03:59:13 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:14:31.190 03:59:13 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:14:31.190 03:59:13 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:14:31.190 03:59:13 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:14:31.190 03:59:13 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:14:31.190 03:59:13 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:14:31.190 03:59:13 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:31.760 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:32.019 Waiting for block devices as requested 00:14:32.019 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:32.279 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:32.279 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:32.539 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:37.808 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:37.808 03:59:20 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:14:38.080 03:59:20 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:14:38.080 03:59:20 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:14:38.339 03:59:20 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:14:38.339 03:59:20 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:14:38.339 03:59:20 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:14:38.339 03:59:20 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:14:38.339 03:59:20 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:14:38.339 No valid GPT data, bailing 00:14:38.339 03:59:20 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:14:38.339 03:59:20 nvme_xnvme -- scripts/common.sh@394 -- # pt= 00:14:38.339 03:59:20 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:14:38.339 03:59:20 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:14:38.339 03:59:20 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:14:38.339 03:59:20 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:14:38.339 03:59:20 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:14:38.340 03:59:20 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:14:38.340 03:59:20 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:38.340 03:59:20 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:38.340 03:59:20 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:14:38.340 03:59:20 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:14:38.340 03:59:20 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:38.340 03:59:20 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:38.340 03:59:20 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:38.340 03:59:20 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:38.340 03:59:20 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:38.340 03:59:20 
nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:38.340 03:59:20 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:38.340 03:59:20 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:38.340 ************************************ 00:14:38.340 START TEST xnvme_rpc 00:14:38.340 ************************************ 00:14:38.340 03:59:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:38.340 03:59:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:38.340 03:59:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:38.340 03:59:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:38.340 03:59:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:38.340 03:59:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70349 00:14:38.340 03:59:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:38.340 03:59:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70349 00:14:38.340 03:59:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70349 ']' 00:14:38.340 03:59:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.340 03:59:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:38.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.340 03:59:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.340 03:59:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:38.340 03:59:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:38.340 [2024-12-07 03:59:21.037400] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
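Annotation: the EAL parameter half of this startup banner continues below; at this point the test has backgrounded spdk_tgt and is blocked in waitforlisten until the JSON-RPC socket answers. Roughly that wait pattern, sketched with rpc.py polling a cheap method rather than quoting the helper (rpc_get_methods is a standard SPDK RPC; the 0.1 s interval and the rpc.py substitution are assumptions, not the literal helper body):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  kill -0 "$spdk_tgt" || exit 1   # give up if the target died during startup
  sleep 0.1
done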
00:14:38.340 [2024-12-07 03:59:21.037524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70349 ] 00:14:38.599 [2024-12-07 03:59:21.218640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.599 [2024-12-07 03:59:21.321947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.539 03:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:39.539 03:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:39.539 03:59:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:14:39.539 03:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.539 03:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.539 xnvme_bdev 00:14:39.539 03:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.539 03:59:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:39.539 03:59:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:39.539 03:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.539 03:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.539 03:59:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:39.539 03:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.539 03:59:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:39.539 03:59:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:39.539 03:59:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:39.539 03:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.539 03:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.539 03:59:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:39.539 03:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.539 03:59:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:39.539 03:59:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:39.539 03:59:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:39.539 03:59:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:39.539 03:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.539 03:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.799 03:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.799 03:59:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:14:39.799 03:59:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:39.799 03:59:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:39.799 03:59:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq 
-r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:39.799 03:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.799 03:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.799 03:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.799 03:59:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:39.799 03:59:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:39.799 03:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.799 03:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:39.799 03:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.799 03:59:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70349 00:14:39.799 03:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70349 ']' 00:14:39.799 03:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70349 00:14:39.799 03:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:39.799 03:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:39.799 03:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70349 00:14:39.799 03:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:39.799 03:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:39.799 killing process with pid 70349 00:14:39.799 03:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70349' 00:14:39.799 03:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70349 00:14:39.799 03:59:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70349 00:14:42.339 00:14:42.339 real 0m3.726s 00:14:42.339 user 0m3.725s 00:14:42.339 sys 0m0.563s 00:14:42.339 03:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:42.339 03:59:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:42.339 ************************************ 00:14:42.339 END TEST xnvme_rpc 00:14:42.339 ************************************ 00:14:42.339 03:59:24 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:42.339 03:59:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:42.339 03:59:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:42.339 03:59:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:42.339 ************************************ 00:14:42.339 START TEST xnvme_bdevperf 00:14:42.339 ************************************ 00:14:42.339 03:59:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:42.339 03:59:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:42.339 03:59:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:14:42.339 03:59:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:42.339 03:59:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:42.339 03:59:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
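Annotation: every property assertion in the xnvme_rpc test above runs through one helper that dumps the saved bdev config and picks a field out with jq; the gen_conf call that closes this chunk prints the JSON shown just below. A standalone sketch of the helper (helper name and jq filter as in the trace; rpc_cmd is swapped for rpc.py here, and it needs a live target to answer):

rpc_xnvme() {
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev |
    jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$1"
}
[[ "$(rpc_xnvme filename)" == /dev/nvme0n1 ]]   # the test asserts name, filename,
[[ "$(rpc_xnvme io_mechanism)" == libaio ]]     # io_mechanism and conserve_cpu this way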
00:14:42.340 03:59:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:42.340 03:59:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:42.340 { 00:14:42.340 "subsystems": [ 00:14:42.340 { 00:14:42.340 "subsystem": "bdev", 00:14:42.340 "config": [ 00:14:42.340 { 00:14:42.340 "params": { 00:14:42.340 "io_mechanism": "libaio", 00:14:42.340 "conserve_cpu": false, 00:14:42.340 "filename": "/dev/nvme0n1", 00:14:42.340 "name": "xnvme_bdev" 00:14:42.340 }, 00:14:42.340 "method": "bdev_xnvme_create" 00:14:42.340 }, 00:14:42.340 { 00:14:42.340 "method": "bdev_wait_for_examine" 00:14:42.340 } 00:14:42.340 ] 00:14:42.340 } 00:14:42.340 ] 00:14:42.340 } 00:14:42.340 [2024-12-07 03:59:24.824636] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:14:42.340 [2024-12-07 03:59:24.824743] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70429 ] 00:14:42.340 [2024-12-07 03:59:25.004452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.599 [2024-12-07 03:59:25.119891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.857 Running I/O for 5 seconds... 00:14:45.174 43462.00 IOPS, 169.77 MiB/s [2024-12-07T03:59:28.479Z] 43636.50 IOPS, 170.46 MiB/s [2024-12-07T03:59:29.859Z] 43802.00 IOPS, 171.10 MiB/s [2024-12-07T03:59:30.798Z] 43526.75 IOPS, 170.03 MiB/s 00:14:48.062 Latency(us) 00:14:48.062 [2024-12-07T03:59:30.798Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.062 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:48.062 xnvme_bdev : 5.00 43658.64 170.54 0.00 0.00 1462.62 145.58 4553.30 00:14:48.062 [2024-12-07T03:59:30.798Z] =================================================================================================================== 00:14:48.062 [2024-12-07T03:59:30.798Z] Total : 43658.64 170.54 0.00 0.00 1462.62 145.58 4553.30 00:14:49.001 03:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:49.001 03:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:49.001 03:59:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:49.001 03:59:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:49.001 03:59:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:49.001 { 00:14:49.001 "subsystems": [ 00:14:49.001 { 00:14:49.001 "subsystem": "bdev", 00:14:49.001 "config": [ 00:14:49.001 { 00:14:49.001 "params": { 00:14:49.001 "io_mechanism": "libaio", 00:14:49.001 "conserve_cpu": false, 00:14:49.001 "filename": "/dev/nvme0n1", 00:14:49.001 "name": "xnvme_bdev" 00:14:49.001 }, 00:14:49.001 "method": "bdev_xnvme_create" 00:14:49.001 }, 00:14:49.001 { 00:14:49.001 "method": "bdev_wait_for_examine" 00:14:49.001 } 00:14:49.001 ] 00:14:49.001 } 00:14:49.001 ] 00:14:49.001 } 00:14:49.001 [2024-12-07 03:59:31.659016] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
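Annotation: this second banner belongs to the randwrite pass; each bdevperf invocation above pipes the generated JSON in over /dev/fd/62 and runs a single 5-second workload at queue depth 64 with 4 KiB I/O, -T restricting the job to the xnvme_bdev target. A standalone equivalent using a temp file instead of the fd redirection (config body as printed in the trace; /tmp/xnvme.json is a hypothetical path chosen for this sketch):

cat > /tmp/xnvme.json <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [
  {"params": {"io_mechanism": "libaio", "conserve_cpu": false,
              "filename": "/dev/nvme0n1", "name": "xnvme_bdev"},
   "method": "bdev_xnvme_create"},
  {"method": "bdev_wait_for_examine"}]}]}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /tmp/xnvme.json -q 64 -w randread -t 5 -T xnvme_bdev -o 4096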
00:14:49.001 [2024-12-07 03:59:31.659300] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70504 ] 00:14:49.263 [2024-12-07 03:59:31.839969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.263 [2024-12-07 03:59:31.947950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.833 Running I/O for 5 seconds... 00:14:51.724 43375.00 IOPS, 169.43 MiB/s [2024-12-07T03:59:35.393Z] 42822.50 IOPS, 167.28 MiB/s [2024-12-07T03:59:36.324Z] 42833.67 IOPS, 167.32 MiB/s [2024-12-07T03:59:37.697Z] 43200.00 IOPS, 168.75 MiB/s 00:14:54.961 Latency(us) 00:14:54.961 [2024-12-07T03:59:37.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:54.961 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:54.961 xnvme_bdev : 5.00 43431.61 169.65 0.00 0.00 1470.17 176.01 4921.78 00:14:54.961 [2024-12-07T03:59:37.697Z] =================================================================================================================== 00:14:54.961 [2024-12-07T03:59:37.697Z] Total : 43431.61 169.65 0.00 0.00 1470.17 176.01 4921.78 00:14:55.905 ************************************ 00:14:55.905 END TEST xnvme_bdevperf 00:14:55.905 ************************************ 00:14:55.905 00:14:55.905 real 0m13.703s 00:14:55.905 user 0m4.863s 00:14:55.905 sys 0m5.865s 00:14:55.905 03:59:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:55.905 03:59:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:55.905 03:59:38 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:55.905 03:59:38 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:55.905 03:59:38 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:55.905 03:59:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:55.905 ************************************ 00:14:55.905 START TEST xnvme_fio_plugin 00:14:55.905 ************************************ 00:14:55.905 03:59:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:55.905 03:59:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:55.905 03:59:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:14:55.905 03:59:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:55.905 03:59:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:55.905 03:59:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:55.905 03:59:38 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:55.905 03:59:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:55.905 03:59:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:14:55.906 03:59:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:55.906 03:59:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:55.906 03:59:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:55.906 03:59:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:55.906 03:59:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:55.906 03:59:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:55.906 03:59:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:55.906 03:59:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:55.906 03:59:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:55.906 03:59:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:55.906 { 00:14:55.906 "subsystems": [ 00:14:55.906 { 00:14:55.906 "subsystem": "bdev", 00:14:55.906 "config": [ 00:14:55.906 { 00:14:55.906 "params": { 00:14:55.906 "io_mechanism": "libaio", 00:14:55.906 "conserve_cpu": false, 00:14:55.906 "filename": "/dev/nvme0n1", 00:14:55.906 "name": "xnvme_bdev" 00:14:55.906 }, 00:14:55.906 "method": "bdev_xnvme_create" 00:14:55.906 }, 00:14:55.906 { 00:14:55.906 "method": "bdev_wait_for_examine" 00:14:55.906 } 00:14:55.906 ] 00:14:55.906 } 00:14:55.906 ] 00:14:55.906 } 00:14:55.906 03:59:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:55.906 03:59:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:55.906 03:59:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:55.906 03:59:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:55.906 03:59:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:56.165 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:56.165 fio-3.35 00:14:56.165 Starting 1 thread 00:15:02.795 00:15:02.795 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70629: Sat Dec 7 03:59:44 2024 00:15:02.795 read: IOPS=49.3k, BW=193MiB/s (202MB/s)(963MiB/5001msec) 00:15:02.795 slat (usec): min=4, max=764, avg=17.85, stdev=22.87 00:15:02.795 clat (usec): min=59, max=6350, avg=772.05, stdev=488.13 00:15:02.795 lat (usec): min=121, max=6354, avg=789.90, stdev=491.14 00:15:02.795 clat percentiles (usec): 00:15:02.795 | 1.00th=[ 161], 5.00th=[ 235], 10.00th=[ 306], 20.00th=[ 416], 00:15:02.795 | 30.00th=[ 506], 40.00th=[ 603], 50.00th=[ 693], 60.00th=[ 791], 00:15:02.795 | 70.00th=[ 898], 80.00th=[ 1037], 90.00th=[ 1237], 95.00th=[ 1483], 00:15:02.795 | 99.00th=[ 2835], 99.50th=[ 3392], 99.90th=[ 4359], 99.95th=[ 4752], 00:15:02.795 | 99.99th=[ 5407] 00:15:02.795 bw ( KiB/s): min=168528, max=219752, per=97.76%, avg=192716.44, stdev=19049.79, samples=9 
00:15:02.795 iops : min=42132, max=54938, avg=48179.11, stdev=4762.45, samples=9 00:15:02.795 lat (usec) : 100=0.07%, 250=5.94%, 500=23.14%, 750=27.06%, 1000=21.59% 00:15:02.795 lat (msec) : 2=19.65%, 4=2.34%, 10=0.21% 00:15:02.795 cpu : usr=24.90%, sys=53.46%, ctx=64, majf=0, minf=764 00:15:02.795 IO depths : 1=0.1%, 2=0.9%, 4=3.6%, 8=10.6%, 16=25.9%, 32=57.0%, >=64=1.8% 00:15:02.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:02.795 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:15:02.795 issued rwts: total=246461,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:02.795 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:02.795 00:15:02.795 Run status group 0 (all jobs): 00:15:02.795 READ: bw=193MiB/s (202MB/s), 193MiB/s-193MiB/s (202MB/s-202MB/s), io=963MiB (1010MB), run=5001-5001msec 00:15:03.362 ----------------------------------------------------- 00:15:03.362 Suppressions used: 00:15:03.362 count bytes template 00:15:03.362 1 11 /usr/src/fio/parse.c 00:15:03.362 1 8 libtcmalloc_minimal.so 00:15:03.362 1 904 libcrypto.so 00:15:03.362 ----------------------------------------------------- 00:15:03.362 00:15:03.362 03:59:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:03.362 03:59:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:03.362 03:59:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:03.362 03:59:45 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:03.362 03:59:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:03.362 03:59:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:03.362 03:59:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:03.362 03:59:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:03.362 03:59:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:03.362 03:59:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:03.362 03:59:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:03.362 03:59:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:03.362 03:59:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:03.362 03:59:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:03.362 03:59:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:03.362 03:59:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:03.362 { 00:15:03.362 "subsystems": [ 00:15:03.362 { 00:15:03.362 "subsystem": "bdev", 00:15:03.362 "config": [ 00:15:03.362 { 00:15:03.362 "params": { 00:15:03.362 "io_mechanism": "libaio", 
00:15:03.362 "conserve_cpu": false, 00:15:03.362 "filename": "/dev/nvme0n1", 00:15:03.362 "name": "xnvme_bdev" 00:15:03.362 }, 00:15:03.362 "method": "bdev_xnvme_create" 00:15:03.362 }, 00:15:03.362 { 00:15:03.362 "method": "bdev_wait_for_examine" 00:15:03.362 } 00:15:03.362 ] 00:15:03.362 } 00:15:03.362 ] 00:15:03.362 } 00:15:03.362 03:59:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:03.362 03:59:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:03.362 03:59:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:03.362 03:59:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:03.362 03:59:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:03.632 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:03.632 fio-3.35 00:15:03.632 Starting 1 thread 00:15:10.203 00:15:10.203 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70726: Sat Dec 7 03:59:51 2024 00:15:10.203 write: IOPS=49.7k, BW=194MiB/s (204MB/s)(971MiB/5001msec); 0 zone resets 00:15:10.203 slat (usec): min=4, max=3175, avg=17.38, stdev=24.78 00:15:10.203 clat (usec): min=90, max=5123, avg=779.11, stdev=464.48 00:15:10.203 lat (usec): min=140, max=5175, avg=796.49, stdev=467.54 00:15:10.203 clat percentiles (usec): 00:15:10.203 | 1.00th=[ 182], 5.00th=[ 258], 10.00th=[ 326], 20.00th=[ 441], 00:15:10.203 | 30.00th=[ 537], 40.00th=[ 627], 50.00th=[ 717], 60.00th=[ 807], 00:15:10.203 | 70.00th=[ 906], 80.00th=[ 1020], 90.00th=[ 1188], 95.00th=[ 1418], 00:15:10.203 | 99.00th=[ 2802], 99.50th=[ 3359], 99.90th=[ 4228], 99.95th=[ 4424], 00:15:10.203 | 99.99th=[ 4817] 00:15:10.203 bw ( KiB/s): min=192368, max=210056, per=100.00%, avg=199503.00, stdev=6087.68, samples=9 00:15:10.203 iops : min=48092, max=52514, avg=49875.67, stdev=1521.91, samples=9 00:15:10.203 lat (usec) : 100=0.03%, 250=4.40%, 500=21.70%, 750=27.65%, 1000=24.92% 00:15:10.203 lat (msec) : 2=18.91%, 4=2.21%, 10=0.18% 00:15:10.203 cpu : usr=28.14%, sys=51.86%, ctx=111, majf=0, minf=765 00:15:10.203 IO depths : 1=0.1%, 2=0.8%, 4=3.4%, 8=10.0%, 16=25.4%, 32=58.4%, >=64=1.9% 00:15:10.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.203 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:15:10.203 issued rwts: total=0,248494,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:10.203 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:10.203 00:15:10.203 Run status group 0 (all jobs): 00:15:10.203 WRITE: bw=194MiB/s (204MB/s), 194MiB/s-194MiB/s (204MB/s-204MB/s), io=971MiB (1018MB), run=5001-5001msec 00:15:10.772 ----------------------------------------------------- 00:15:10.772 Suppressions used: 00:15:10.772 count bytes template 00:15:10.772 1 11 /usr/src/fio/parse.c 00:15:10.772 1 8 libtcmalloc_minimal.so 00:15:10.772 1 904 libcrypto.so 00:15:10.772 ----------------------------------------------------- 00:15:10.772 00:15:10.772 00:15:10.772 real 0m14.759s 00:15:10.772 user 0m6.310s 00:15:10.772 sys 0m6.016s 00:15:10.772 ************************************ 00:15:10.772 END TEST 
xnvme_fio_plugin 00:15:10.773 ************************************ 00:15:10.773 03:59:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:10.773 03:59:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:10.773 03:59:53 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:10.773 03:59:53 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:10.773 03:59:53 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:15:10.773 03:59:53 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:10.773 03:59:53 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:10.773 03:59:53 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:10.773 03:59:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:10.773 ************************************ 00:15:10.773 START TEST xnvme_rpc 00:15:10.773 ************************************ 00:15:10.773 03:59:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:10.773 03:59:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:10.773 03:59:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:10.773 03:59:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:10.773 03:59:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:10.773 03:59:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70812 00:15:10.773 03:59:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70812 00:15:10.773 03:59:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:10.773 03:59:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70812 ']' 00:15:10.773 03:59:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.773 03:59:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:10.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.773 03:59:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.773 03:59:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:10.773 03:59:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.773 [2024-12-07 03:59:53.444519] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
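Annotation: from here the whole rpc/bdevperf/fio cycle repeats with conserve_cpu=true, and the EAL parameter half of the banner continues below. The suite keeps a two-entry map from the boolean to the CLI flag and appends it to the create call, which is why this spdk_tgt pass runs bdev_xnvme_create with -c. A sketch of that dispatch (argument order as in the trace; the expansion is deliberately unquoted so the false case adds no argument):

declare -A cc=([false]="" [true]="-c")
conserve_cpu=true
/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
  bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio ${cc[$conserve_cpu]}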
00:15:10.773 [2024-12-07 03:59:53.444901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70812 ] 00:15:11.032 [2024-12-07 03:59:53.627579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.032 [2024-12-07 03:59:53.736596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.966 03:59:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:11.966 03:59:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:11.966 03:59:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:15:11.966 03:59:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.966 03:59:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.966 xnvme_bdev 00:15:11.966 03:59:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.966 03:59:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:11.966 03:59:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:11.966 03:59:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.966 03:59:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:11.966 03:59:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.966 03:59:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.966 03:59:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:11.966 03:59:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:11.966 03:59:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:11.966 03:59:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.966 03:59:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:11.966 03:59:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:11.966 03:59:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.223 03:59:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:15:12.223 03:59:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:12.223 03:59:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:12.223 03:59:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.223 03:59:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.223 03:59:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:12.223 03:59:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.223 03:59:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:15:12.223 03:59:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:12.223 03:59:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:12.223 03:59:54 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.223 03:59:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.223 03:59:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:12.223 03:59:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.223 03:59:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:12.223 03:59:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:12.223 03:59:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.223 03:59:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:12.223 03:59:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.223 03:59:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70812 00:15:12.223 03:59:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70812 ']' 00:15:12.223 03:59:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70812 00:15:12.223 03:59:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:12.223 03:59:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:12.223 03:59:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70812 00:15:12.223 killing process with pid 70812 00:15:12.223 03:59:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:12.223 03:59:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:12.223 03:59:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70812' 00:15:12.223 03:59:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70812 00:15:12.223 03:59:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70812 00:15:14.747 ************************************ 00:15:14.747 END TEST xnvme_rpc 00:15:14.747 ************************************ 00:15:14.747 00:15:14.747 real 0m3.845s 00:15:14.747 user 0m3.851s 00:15:14.747 sys 0m0.548s 00:15:14.747 03:59:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:14.747 03:59:57 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:14.747 03:59:57 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:14.747 03:59:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:14.747 03:59:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:14.747 03:59:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:14.747 ************************************ 00:15:14.747 START TEST xnvme_bdevperf 00:15:14.747 ************************************ 00:15:14.747 03:59:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:14.747 03:59:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:14.747 03:59:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:15:14.747 03:59:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:14.747 03:59:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:14.747 03:59:57 nvme_xnvme.xnvme_bdevperf -- 
00:15:14.747 03:59:57 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:15:14.747 03:59:57 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:14.747 03:59:57 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:14.747 03:59:57 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:14.747 ************************************
00:15:14.747 START TEST xnvme_bdevperf
00:15:14.747 ************************************
00:15:14.747 03:59:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf
00:15:14.747 03:59:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern
00:15:14.747 03:59:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio
00:15:14.747 03:59:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:15:14.747 03:59:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
00:15:14.747 03:59:57 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:15:14.747 03:59:57 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:15:14.747 03:59:57 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:15:14.747 {
00:15:14.747 "subsystems": [
00:15:14.747 {
00:15:14.747 "subsystem": "bdev",
00:15:14.747 "config": [
00:15:14.747 {
00:15:14.747 "params": {
00:15:14.747 "io_mechanism": "libaio",
00:15:14.747 "conserve_cpu": true,
00:15:14.747 "filename": "/dev/nvme0n1",
00:15:14.747 "name": "xnvme_bdev"
00:15:14.747 },
00:15:14.747 "method": "bdev_xnvme_create"
00:15:14.747 },
00:15:14.747 {
00:15:14.747 "method": "bdev_wait_for_examine"
00:15:14.747 }
00:15:14.747 ]
00:15:14.747 }
00:15:14.747 ]
00:15:14.747 }
00:15:14.747 [2024-12-07 03:59:57.351011] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization...
00:15:14.747 [2024-12-07 03:59:57.351151] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70892 ]
00:15:15.006 [2024-12-07 03:59:57.530503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:15.006 [2024-12-07 03:59:57.641269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:15.265 Running I/O for 5 seconds...
00:15:17.568 44136.00 IOPS, 172.41 MiB/s [2024-12-07T04:00:01.236Z]
40329.50 IOPS, 157.54 MiB/s [2024-12-07T04:00:02.169Z]
40812.67 IOPS, 159.42 MiB/s [2024-12-07T04:00:03.102Z]
41435.00 IOPS, 161.86 MiB/s
00:15:20.366 Latency(us)
00:15:20.366 [2024-12-07T04:00:03.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:20.366 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:15:20.366 xnvme_bdev : 5.00 41896.81 163.66 0.00 0.00 1524.00 195.75 8369.66
00:15:20.366 [2024-12-07T04:00:03.102Z] ===================================================================================================================
00:15:20.366 [2024-12-07T04:00:03.102Z] Total : 41896.81 163.66 0.00 0.00 1524.00 195.75 8369.66
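The --json /dev/fd/62 argument above feeds bdevperf the JSON that gen_conf printed through a pipe; the equivalent with an ordinary file (the /tmp path is illustrative, the JSON is the one from this log) would be:

    cat > /tmp/xnvme_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": {
                "io_mechanism": "libaio",
                "conserve_cpu": true,
                "filename": "/dev/nvme0n1",
                "name": "xnvme_bdev"
              },
              "method": "bdev_xnvme_create"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/xnvme_bdev.json \
        -q 64 -w randread -t 5 -T xnvme_bdev -o 4096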
00:15:21.743 04:00:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:15:21.743 04:00:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:15:21.743 04:00:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:15:21.743 04:00:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:15:21.743 04:00:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:15:21.743 {
00:15:21.743 "subsystems": [
00:15:21.743 {
00:15:21.743 "subsystem": "bdev",
00:15:21.743 "config": [
00:15:21.743 {
00:15:21.743 "params": {
00:15:21.743 "io_mechanism": "libaio",
00:15:21.743 "conserve_cpu": true,
00:15:21.743 "filename": "/dev/nvme0n1",
00:15:21.743 "name": "xnvme_bdev"
00:15:21.743 },
00:15:21.743 "method": "bdev_xnvme_create"
00:15:21.743 },
00:15:21.743 {
00:15:21.743 "method": "bdev_wait_for_examine"
00:15:21.743 }
00:15:21.743 ]
00:15:21.743 }
00:15:21.743 ]
00:15:21.743 }
00:15:21.743 [2024-12-07 04:00:04.229522] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization...
00:15:21.743 [2024-12-07 04:00:04.229656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70977 ]
00:15:21.743 [2024-12-07 04:00:04.409846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:22.029 [2024-12-07 04:00:04.520404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:22.288 Running I/O for 5 seconds...
00:15:24.158 44926.00 IOPS, 175.49 MiB/s [2024-12-07T04:00:08.270Z]
40291.00 IOPS, 157.39 MiB/s [2024-12-07T04:00:09.205Z]
41557.33 IOPS, 162.33 MiB/s [2024-12-07T04:00:10.141Z]
42228.50 IOPS, 164.96 MiB/s
00:15:27.405 Latency(us)
00:15:27.405 [2024-12-07T04:00:10.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:27.405 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:15:27.405 xnvme_bdev : 5.00 41760.73 163.13 0.00 0.00 1528.90 165.32 5395.53
00:15:27.405 [2024-12-07T04:00:10.141Z] ===================================================================================================================
00:15:27.405 [2024-12-07T04:00:10.141Z] Total : 41760.73 163.13 0.00 0.00 1528.90 165.32 5395.53
00:15:28.344
00:15:28.344 real 0m13.759s
00:15:28.344 user 0m5.072s
00:15:28.344 sys 0m5.906s
00:15:28.344 04:00:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:28.344 04:00:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:15:28.344 ************************************
00:15:28.344 END TEST xnvme_bdevperf
00:15:28.344 ************************************
00:15:28.604 04:00:11 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:15:28.604 04:00:11 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:28.604 04:00:11 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:28.604 04:00:11 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:28.604 ************************************
00:15:28.604 START TEST xnvme_fio_plugin
00:15:28.604 ************************************
00:15:28.604 04:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:15:28.604 04:00:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:15:28.604 04:00:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio
00:15:28.604 04:00:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:15:28.604 04:00:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:28.604 04:00:11 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:15:28.604 04:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:28.604 04:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:15:28.604 04:00:11 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:15:28.604 04:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:15:28.605 04:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:15:28.605 04:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:15:28.605 04:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:28.605 04:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:15:28.605 04:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:15:28.605 04:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:15:28.605 04:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:28.605 04:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:15:28.605 04:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:15:28.605 04:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:15:28.605 04:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:15:28.605 04:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:15:28.605 04:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:15:28.605 04:00:11 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:28.605 {
00:15:28.605 "subsystems": [
00:15:28.605 {
00:15:28.605 "subsystem": "bdev",
00:15:28.605 "config": [
00:15:28.605 {
00:15:28.605 "params": {
00:15:28.605 "io_mechanism": "libaio",
00:15:28.605 "conserve_cpu": true,
00:15:28.605 "filename": "/dev/nvme0n1",
00:15:28.605 "name": "xnvme_bdev"
00:15:28.605 },
00:15:28.605 "method": "bdev_xnvme_create"
00:15:28.605 },
00:15:28.605 {
00:15:28.605 "method": "bdev_wait_for_examine"
00:15:28.605 }
00:15:28.605 ]
00:15:28.605 }
00:15:28.605 ]
00:15:28.605 }
00:15:28.864 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:15:28.864 fio-3.35
00:15:28.864 Starting 1 thread
00:15:35.443
00:15:35.443 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71102: Sat Dec 7 04:00:17 2024
00:15:35.443 read: IOPS=43.0k, BW=168MiB/s (176MB/s)(839MiB/5001msec)
00:15:35.443 slat (usec): min=4, max=763, avg=20.47, stdev=22.41
00:15:35.443 clat (usec): min=56, max=6023, avg=868.52, stdev=555.64
00:15:35.443 lat (usec): min=92, max=6093, avg=888.99, stdev=559.98
00:15:35.443 clat percentiles (usec):
00:15:35.443 | 1.00th=[ 172], 5.00th=[ 243], 10.00th=[ 314], 20.00th=[ 441],
00:15:35.443 | 30.00th=[ 562], 40.00th=[ 676], 50.00th=[ 791], 60.00th=[ 906],
00:15:35.443 | 70.00th=[ 1029], 80.00th=[ 1172], 90.00th=[ 1401], 95.00th=[ 1713],
00:15:35.443 | 99.00th=[ 3228], 99.50th=[ 3818], 99.90th=[ 4490], 99.95th=[ 4752],
00:15:35.443 | 99.99th=[ 5080]
00:15:35.443 bw ( KiB/s): min=163928, max=183584, per=100.00%, avg=172105.67, stdev=6283.22, samples=9
00:15:35.443 iops : min=40982, max=45896, avg=43026.33, stdev=1570.87, samples=9
00:15:35.443 lat (usec) : 100=0.03%, 250=5.51%, 500=19.51%, 750=21.59%, 1000=21.43%
00:15:35.443 lat (msec) : 2=28.32%, 4=3.26%, 10=0.36%
00:15:35.443 cpu : usr=24.98%, sys=51.26%, ctx=104, majf=0, minf=764
00:15:35.443 IO depths : 1=0.1%, 2=1.2%, 4=4.4%, 8=11.3%, 16=26.1%, 32=55.2%, >=64=1.7%
00:15:35.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:35.443 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0%
00:15:35.443 issued rwts: total=214909,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:35.443 latency : target=0, window=0, percentile=100.00%, depth=64
00:15:35.443
00:15:35.443 Run status group 0 (all jobs):
00:15:35.443 READ: bw=168MiB/s (176MB/s), 168MiB/s-168MiB/s (176MB/s-176MB/s), io=839MiB (880MB), run=5001-5001msec
00:15:36.012 -----------------------------------------------------
00:15:36.012 Suppressions used:
00:15:36.012 count bytes template
00:15:36.012 1 11 /usr/src/fio/parse.c
00:15:36.012 1 8 libtcmalloc_minimal.so
00:15:36.012 1 904 libcrypto.so
00:15:36.012 -----------------------------------------------------
00:15:36.012
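The fio_bdev helper resolved above boils down to preloading ASAN (when the plugin links against it) together with SPDK's fio plugin, then running stock fio against the same JSON config; a hand-run equivalent, reusing the illustrative /tmp file from the sketch earlier:

    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/xnvme_bdev.json \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev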
00:15:36.012 04:00:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:15:36.012 04:00:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:36.012 04:00:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:15:36.012 04:00:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:36.012 04:00:18 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:15:36.012 04:00:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:15:36.012 04:00:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:15:36.012 04:00:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:15:36.012 04:00:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:15:36.012 04:00:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:36.012 04:00:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:15:36.012 04:00:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:15:36.012 04:00:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:15:36.012 04:00:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:15:36.012 04:00:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:36.012 04:00:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:15:36.012 04:00:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:15:36.012 04:00:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:15:36.012 04:00:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:15:36.012 04:00:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:15:36.012 04:00:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:15:36.012 {
00:15:36.012 "subsystems": [
00:15:36.012 {
00:15:36.012 "subsystem": "bdev",
00:15:36.012 "config": [
00:15:36.012 {
00:15:36.012 "params": {
00:15:36.012 "io_mechanism": "libaio",
00:15:36.012 "conserve_cpu": true,
00:15:36.012 "filename": "/dev/nvme0n1",
00:15:36.012 "name": "xnvme_bdev"
00:15:36.012 },
00:15:36.012 "method": "bdev_xnvme_create"
00:15:36.012 },
00:15:36.012 {
00:15:36.012 "method": "bdev_wait_for_examine"
00:15:36.012 }
00:15:36.012 ]
00:15:36.012 }
00:15:36.012 ]
00:15:36.012 }
00:15:36.272 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:15:36.272 fio-3.35
00:15:36.272 Starting 1 thread
00:15:42.941
00:15:42.941 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71195: Sat Dec 7 04:00:24 2024
00:15:42.941 write: IOPS=38.6k, BW=151MiB/s (158MB/s)(755MiB/5001msec); 0 zone resets
00:15:42.941 slat (usec): min=4, max=1184, avg=21.92, stdev=34.00
00:15:42.941 clat (usec): min=6, max=16817, avg=1018.83, stdev=948.55
00:15:42.941 lat (usec): min=72, max=16822, avg=1040.75, stdev=951.41
00:15:42.941 clat percentiles (usec):
00:15:42.941 | 1.00th=[ 182], 5.00th=[ 285], 10.00th=[ 367], 20.00th=[ 498],
00:15:42.941 | 30.00th=[ 611], 40.00th=[ 725], 50.00th=[ 840], 60.00th=[ 955],
00:15:42.941 | 70.00th=[ 1090], 80.00th=[ 1287], 90.00th=[ 1680], 95.00th=[ 2311],
00:15:42.941 | 99.00th=[ 4293], 99.50th=[ 6849], 99.90th=[12387], 99.95th=[14484],
00:15:42.941 | 99.99th=[16188]
00:15:42.941 bw ( KiB/s): min=118912, max=195016, per=100.00%, avg=155296.00, stdev=25054.37, samples=9
00:15:42.941 iops : min=29728, max=48754, avg=38824.00, stdev=6263.59, samples=9
00:15:42.941 lat (usec) : 10=0.01%, 20=0.01%, 50=0.01%, 100=0.12%, 250=3.14%
00:15:42.941 lat (usec) : 500=17.07%, 750=21.91%, 1000=21.15%
00:15:42.941 lat (msec) : 2=30.04%, 4=5.31%, 10=1.04%, 20=0.20%
00:15:42.941 cpu : usr=26.40%, sys=50.44%, ctx=147, majf=0, minf=765
00:15:42.941 IO depths : 1=0.1%, 2=0.8%, 4=3.6%, 8=10.4%, 16=24.9%, 32=57.7%, >=64=2.5%
00:15:42.941 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:42.941 complete : 0=0.0%, 4=98.1%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0%
00:15:42.941 issued rwts: total=0,193182,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:42.941 latency : target=0, window=0, percentile=100.00%, depth=64
00:15:42.941
00:15:42.941 Run status group 0 (all jobs):
00:15:42.941 WRITE: bw=151MiB/s (158MB/s), 151MiB/s-151MiB/s (158MB/s-158MB/s), io=755MiB (791MB), run=5001-5001msec
00:15:43.200 -----------------------------------------------------
00:15:43.200 Suppressions used:
00:15:43.200 count bytes template
00:15:43.200 1 11 /usr/src/fio/parse.c
00:15:43.200 1 8 libtcmalloc_minimal.so
00:15:43.200 1 904 libcrypto.so
00:15:43.200 -----------------------------------------------------
00:15:43.200
00:15:43.200
00:15:43.200 real 0m14.743s
00:15:43.200 user 0m6.208s
00:15:43.200 sys 0m5.837s
00:15:43.200 04:00:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:43.200 04:00:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:15:43.200 ************************************
00:15:43.200 END TEST xnvme_fio_plugin
00:15:43.200 ************************************
00:15:43.200 04:00:25 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}"
00:15:43.200 04:00:25 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring
00:15:43.200 04:00:25 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1
00:15:43.201 04:00:25 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1
00:15:43.201 04:00:25 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev
00:15:43.201 04:00:25 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:15:43.201 04:00:25 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false
00:15:43.201 04:00:25 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false
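The method_bdev_xnvme_create_0[...] assignments above are one iteration of xnvme.sh's outer driver loop; reconstructed as a standalone sketch (the variable names match the traced script, but the array contents are assumptions inferred from the runs visible in this log):

    # sketch of the xnvme.sh driver loop; array values inferred, not taken from the source
    xnvme_io=(libaio io_uring)
    xnvme_conserve_cpu=(false true)
    declare -A method_bdev_xnvme_create_0
    for io in "${xnvme_io[@]}"; do
        method_bdev_xnvme_create_0["io_mechanism"]=$io
        method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1
        method_bdev_xnvme_create_0["name"]=xnvme_bdev
        for cc in "${xnvme_conserve_cpu[@]}"; do
            method_bdev_xnvme_create_0["conserve_cpu"]=$cc
            run_test xnvme_rpc xnvme_rpc
            run_test xnvme_bdevperf xnvme_bdevperf
            run_test xnvme_fio_plugin xnvme_fio_plugin
        done
    done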
00:15:43.201 04:00:25 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:15:43.201 04:00:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:43.201 04:00:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:43.201 04:00:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:43.201 ************************************
00:15:43.201 START TEST xnvme_rpc
00:15:43.201 ************************************
00:15:43.201 04:00:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:15:43.201 04:00:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:15:43.201 04:00:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:15:43.201 04:00:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:15:43.201 04:00:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:15:43.201 04:00:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71281
00:15:43.201 04:00:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71281
00:15:43.201 04:00:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71281 ']'
00:15:43.201 04:00:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:43.201 04:00:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:43.201 04:00:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:15:43.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:43.201 04:00:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:43.201 04:00:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:43.201 04:00:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:43.459 [2024-12-07 04:00:26.029578] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization...
00:15:43.459 [2024-12-07 04:00:26.029733] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71281 ]
00:15:43.719 [2024-12-07 04:00:26.211900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:43.719 [2024-12-07 04:00:26.317577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring ''
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:44.653 xnvme_bdev
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]]
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]]
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]]
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:44.653 04:00:27 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
00:15:44.969 04:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:44.970 04:00:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]]
00:15:44.970 04:00:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev
00:15:44.970 04:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:44.970 04:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:44.970 04:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:44.970 04:00:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71281
00:15:44.970 04:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71281 ']'
00:15:44.970 04:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71281
00:15:44.970 04:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname
00:15:44.970 04:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:44.970 04:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71281
killing process with pid 71281
00:15:44.970 04:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:44.970 04:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:44.970 04:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71281'
00:15:44.970 04:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71281
00:15:44.970 04:00:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71281
00:15:47.504 ************************************
00:15:47.504 END TEST xnvme_rpc
00:15:47.504 ************************************
00:15:47.504
00:15:47.504 real 0m4.050s
00:15:47.504 user 0m4.076s
00:15:47.504 sys 0m0.547s
00:15:47.504 04:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:47.504 04:00:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:47.504 04:00:30 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:15:47.504 04:00:30 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:47.504 04:00:30 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:47.504 04:00:30 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:47.504 ************************************
00:15:47.504 START TEST xnvme_bdevperf
00:15:47.504 ************************************
00:15:47.505 04:00:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf
00:15:47.505 04:00:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern
00:15:47.505 04:00:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring
00:15:47.505 04:00:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:15:47.505 04:00:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
00:15:47.505 04:00:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:15:47.505 04:00:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:15:47.505 04:00:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:15:47.505 {
00:15:47.505 "subsystems": [
00:15:47.505 {
00:15:47.505 "subsystem": "bdev",
00:15:47.505 "config": [
00:15:47.505 {
00:15:47.505 "params": {
00:15:47.505 "io_mechanism": "io_uring",
00:15:47.505 "conserve_cpu": false,
00:15:47.505 "filename": "/dev/nvme0n1",
00:15:47.505 "name": "xnvme_bdev"
00:15:47.505 },
00:15:47.505 "method": "bdev_xnvme_create"
00:15:47.505 },
00:15:47.505 {
00:15:47.505 "method": "bdev_wait_for_examine"
00:15:47.505 }
00:15:47.505 ]
00:15:47.505 }
00:15:47.505 ]
00:15:47.505 }
00:15:47.505 [2024-12-07 04:00:30.144618] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization...
00:15:47.505 [2024-12-07 04:00:30.144764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71366 ]
00:15:47.765 [2024-12-07 04:00:30.326845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:47.765 [2024-12-07 04:00:30.457756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:48.332 Running I/O for 5 seconds...
00:15:50.197 38694.00 IOPS, 151.15 MiB/s [2024-12-07T04:00:34.304Z]
37550.00 IOPS, 146.68 MiB/s [2024-12-07T04:00:35.235Z]
37455.33 IOPS, 146.31 MiB/s [2024-12-07T04:00:36.164Z]
38161.50 IOPS, 149.07 MiB/s
00:15:53.428 Latency(us)
00:15:53.428 [2024-12-07T04:00:36.164Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:53.428 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:15:53.428 xnvme_bdev : 5.00 39433.17 154.04 0.00 0.00 1617.34 371.77 7264.23
00:15:53.428 [2024-12-07T04:00:36.164Z] ===================================================================================================================
00:15:53.428 [2024-12-07T04:00:36.164Z] Total : 39433.17 154.04 0.00 0.00 1617.34 371.77 7264.23
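Each phase above selects its workload list through a bash nameref: local -n io_pattern_ref=io_uring binds io_pattern_ref to whichever array the current io mechanism names, so one loop body serves every mechanism. A minimal illustration of the construct (the array values here are placeholders, not taken from xnvme.sh):

    # bash nameref demo: the function iterates whatever array *name* it is given
    io_uring=(randread randwrite)
    run_patterns() {
        local -n io_pattern_ref=$1      # $1 is an array name, e.g. io_uring
        local io_pattern
        for io_pattern in "${io_pattern_ref[@]}"; do
            echo "would run: bdevperf -w $io_pattern"
        done
    }
    run_patterns io_uring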
00:15:54.362 04:00:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:15:54.362 04:00:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:15:54.362 04:00:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:15:54.362 04:00:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:15:54.362 04:00:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:15:54.362 {
00:15:54.362 "subsystems": [
00:15:54.362 {
00:15:54.362 "subsystem": "bdev",
00:15:54.362 "config": [
00:15:54.362 {
00:15:54.362 "params": {
00:15:54.362 "io_mechanism": "io_uring",
00:15:54.362 "conserve_cpu": false,
00:15:54.362 "filename": "/dev/nvme0n1",
00:15:54.362 "name": "xnvme_bdev"
00:15:54.362 },
00:15:54.362 "method": "bdev_xnvme_create"
00:15:54.362 },
00:15:54.362 {
00:15:54.362 "method": "bdev_wait_for_examine"
00:15:54.362 }
00:15:54.362 ]
00:15:54.362 }
00:15:54.362 ]
00:15:54.362 }
00:15:54.621 [2024-12-07 04:00:37.109092] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization...
00:15:54.621 [2024-12-07 04:00:37.109198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71447 ]
00:15:54.881 [2024-12-07 04:00:37.289138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:55.140 [2024-12-07 04:00:37.416833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:55.517 Running I/O for 5 seconds...
00:15:57.453 30276.00 IOPS, 118.27 MiB/s [2024-12-07T04:00:41.156Z]
25890.00 IOPS, 101.13 MiB/s [2024-12-07T04:00:42.096Z]
24737.00 IOPS, 96.63 MiB/s [2024-12-07T04:00:43.036Z]
25664.75 IOPS, 100.25 MiB/s [2024-12-07T04:00:43.036Z]
25945.20 IOPS, 101.35 MiB/s
00:16:00.300 Latency(us)
00:16:00.300 [2024-12-07T04:00:43.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:00.300 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096)
00:16:00.300 xnvme_bdev : 5.01 25912.65 101.22 0.00 0.00 2461.90 605.35 8001.18
00:16:00.300 [2024-12-07T04:00:43.036Z] ===================================================================================================================
00:16:00.300 [2024-12-07T04:00:43.036Z] Total : 25912.65 101.22 0.00 0.00 2461.90 605.35 8001.18
00:16:01.681
00:16:01.681 real 0m13.968s
00:16:01.681 user 0m7.052s
00:16:01.681 sys 0m6.664s
00:16:01.681 04:00:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:01.681 ************************************
00:16:01.681 END TEST xnvme_bdevperf
00:16:01.681 ************************************
00:16:01.681 04:00:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:16:01.681 04:00:44 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin
00:16:01.681 04:00:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:01.681 04:00:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:01.681 04:00:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:01.681 ************************************
00:16:01.681 START TEST xnvme_fio_plugin
00:16:01.681 ************************************
00:16:01.681 04:00:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin
00:16:01.681 04:00:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern
00:16:01.681 04:00:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio
00:16:01.681 04:00:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:16:01.681 04:00:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:01.681 04:00:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:16:01.681 04:00:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:01.681 04:00:44 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:16:01.681 04:00:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:16:01.681 04:00:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:16:01.681 04:00:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:16:01.681 04:00:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:16:01.681 04:00:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:01.681 04:00:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:16:01.681 04:00:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:16:01.681 04:00:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:16:01.681 04:00:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:01.681 04:00:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:16:01.681 04:00:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:16:01.681 04:00:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:16:01.681 04:00:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:16:01.681 04:00:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:16:01.681 04:00:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:16:01.681 04:00:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:01.681 {
00:16:01.681 "subsystems": [
00:16:01.681 {
00:16:01.681 "subsystem": "bdev",
00:16:01.681 "config": [
00:16:01.681 {
00:16:01.681 "params": {
00:16:01.681 "io_mechanism": "io_uring",
00:16:01.681 "conserve_cpu": false,
00:16:01.681 "filename": "/dev/nvme0n1",
00:16:01.681 "name": "xnvme_bdev"
00:16:01.681 },
00:16:01.681 "method": "bdev_xnvme_create"
00:16:01.681 },
00:16:01.681 {
00:16:01.681 "method": "bdev_wait_for_examine"
00:16:01.681 }
00:16:01.681 ]
00:16:01.681 }
00:16:01.682 ]
00:16:01.682 }
00:16:01.682 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:16:01.682 fio-3.35
00:16:01.682 Starting 1 thread
00:16:08.263
00:16:08.263 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71572: Sat Dec 7 04:00:50 2024
00:16:08.263 read: IOPS=23.8k, BW=93.2MiB/s (97.7MB/s)(466MiB/5002msec)
00:16:08.263 slat (usec): min=2, max=107, avg= 7.64, stdev= 3.77
00:16:08.263 clat (usec): min=1286, max=5311, avg=2374.61, stdev=353.80
00:16:08.263 lat (usec): min=1289, max=5315, avg=2382.26, stdev=355.42
00:16:08.263 clat percentiles (usec):
00:16:08.263 | 1.00th=[ 1483], 5.00th=[ 1663], 10.00th=[ 1827], 20.00th=[ 2073],
00:16:08.263 | 30.00th=[ 2245], 40.00th=[ 2343], 50.00th=[ 2442], 60.00th=[ 2507],
00:16:08.263 | 70.00th=[ 2606], 80.00th=[ 2671], 90.00th=[ 2769], 95.00th=[ 2835],
00:16:08.263 | 99.00th=[ 2933], 99.50th=[ 2999], 99.90th=[ 3097], 99.95th=[ 3195],
00:16:08.263 | 99.99th=[ 3294]
00:16:08.263 bw ( KiB/s): min=86016, max=103936, per=97.14%, avg=92671.11, stdev=5757.42, samples=9
00:16:08.263 iops : min=21504, max=25984, avg=23167.78, stdev=1439.36, samples=9
00:16:08.263 lat (msec) : 2=16.72%, 4=83.28%, 10=0.01%
00:16:08.263 cpu : usr=35.85%, sys=62.51%, ctx=12, majf=0, minf=762
00:16:08.263 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:16:08.263 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:08.263 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0%
00:16:08.263 issued rwts: total=119295,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:08.263 latency : target=0, window=0, percentile=100.00%, depth=64
00:16:08.263
00:16:08.263 Run status group 0 (all jobs):
00:16:08.263 READ: bw=93.2MiB/s (97.7MB/s), 93.2MiB/s-93.2MiB/s (97.7MB/s-97.7MB/s), io=466MiB (489MB), run=5002-5002msec
00:16:09.198 -----------------------------------------------------
00:16:09.198 Suppressions used:
00:16:09.198 count bytes template
00:16:09.198 1 11 /usr/src/fio/parse.c
00:16:09.198 1 8 libtcmalloc_minimal.so
00:16:09.198 1 904 libcrypto.so
00:16:09.198 -----------------------------------------------------
00:16:09.198
00:16:09.198 04:00:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}"
00:16:09.198 04:00:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:09.198 04:00:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf
00:16:09.198 04:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:09.198 04:00:51 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable
00:16:09.198 04:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:16:09.198 04:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:16:09.198 04:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:16:09.198 04:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers
00:16:09.198 04:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:09.198 04:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift
00:16:09.198 04:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib=
00:16:09.198 04:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:16:09.198 04:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan
00:16:09.198 04:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:16:09.198 04:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:16:09.198 04:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:16:09.198 04:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:16:09.198 04:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break
00:16:09.198 04:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:16:09.198 04:00:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev
00:16:09.198 {
00:16:09.198 "subsystems": [
00:16:09.198 {
00:16:09.198 "subsystem": "bdev",
00:16:09.198 "config": [
00:16:09.198 {
00:16:09.198 "params": {
00:16:09.198 "io_mechanism": "io_uring",
00:16:09.198 "conserve_cpu": false,
00:16:09.198 "filename": "/dev/nvme0n1",
00:16:09.198 "name": "xnvme_bdev"
00:16:09.198 },
00:16:09.198 "method": "bdev_xnvme_create"
00:16:09.198 },
00:16:09.198 {
00:16:09.198 "method": "bdev_wait_for_examine"
00:16:09.198 }
00:16:09.198 ]
00:16:09.198 }
00:16:09.198 ]
00:16:09.198 }
00:16:09.198 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64
00:16:09.198 fio-3.35
00:16:09.198 Starting 1 thread
00:16:15.793
00:16:15.793 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71669: Sat Dec 7 04:00:57 2024
00:16:15.793 write: IOPS=22.5k, BW=88.0MiB/s (92.3MB/s)(440MiB/5001msec); 0 zone resets
00:16:15.793 slat (usec): min=2, max=190, avg= 8.96, stdev= 4.14
00:16:15.793 clat (usec): min=1278, max=6052, avg=2485.72, stdev=327.57
00:16:15.793 lat (usec): min=1283, max=6060, avg=2494.68, stdev=329.06
00:16:15.793 clat percentiles (usec):
00:16:15.793 | 1.00th=[ 1500], 5.00th=[ 1713], 10.00th=[ 2040], 20.00th=[ 2311],
00:16:15.793 | 30.00th=[ 2376], 40.00th=[ 2474], 50.00th=[ 2540], 60.00th=[ 2606],
00:16:15.793 | 70.00th=[ 2671], 80.00th=[ 2769], 90.00th=[ 2835], 95.00th=[ 2900],
00:16:15.793 | 99.00th=[ 2966], 99.50th=[ 2999], 99.90th=[ 3130], 99.95th=[ 3195],
00:16:15.793 | 99.99th=[ 3359]
00:16:15.793 bw ( KiB/s): min=84992, max=107520, per=100.00%, avg=90623.11, stdev=8894.33, samples=9
00:16:15.793 iops : min=21248, max=26880, avg=22655.78, stdev=2223.58, samples=9
00:16:15.793 lat (msec) : 2=9.49%, 4=90.51%, 10=0.01%
00:16:15.793 cpu : usr=39.34%, sys=58.90%, ctx=15, majf=0, minf=763
00:16:15.793 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6%
00:16:15.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:15.793 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0%
00:16:15.793 issued rwts: total=0,112639,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:15.793 latency : target=0, window=0, percentile=100.00%, depth=64
00:16:15.793
00:16:15.793 Run status group 0 (all jobs):
00:16:15.793 WRITE: bw=88.0MiB/s (92.3MB/s), 88.0MiB/s-88.0MiB/s (92.3MB/s-92.3MB/s), io=440MiB (461MB), run=5001-5001msec
00:16:16.363 -----------------------------------------------------
00:16:16.363 Suppressions used:
00:16:16.363 count bytes template
00:16:16.363 1 11 /usr/src/fio/parse.c
00:16:16.363 1 8 libtcmalloc_minimal.so
00:16:16.363 1 904 libcrypto.so
00:16:16.363 -----------------------------------------------------
00:16:16.363
00:16:16.363
00:16:16.363 real 0m14.839s
00:16:16.363 user 0m7.640s
00:16:16.363 sys 0m6.778s
00:16:16.363 04:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:16.363 04:00:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x
00:16:16.363 ************************************
00:16:16.363 END TEST xnvme_fio_plugin
00:16:16.363 ************************************
00:16:16.363 04:00:58 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}"
00:16:16.363 04:00:58 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true
00:16:16.363 04:00:58 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true
00:16:16.363 04:00:58 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc
00:16:16.363 04:00:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:16.363 04:00:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:16.363 04:00:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:16.363 ************************************
00:16:16.363 START TEST xnvme_rpc
00:16:16.363 ************************************
00:16:16.363 04:00:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc
00:16:16.363 04:00:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=()
00:16:16.363 04:00:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc
00:16:16.363 04:00:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]=
00:16:16.363 04:00:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c
00:16:16.363 04:00:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71750
00:16:16.363 04:00:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:16.363 04:00:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71750
00:16:16.363 04:00:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71750 ']'
00:16:16.363 04:00:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:16.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:16.363 04:00:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:16.363 04:00:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:16.363 04:00:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:16.363 04:00:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:16.623 [2024-12-07 04:00:59.136640] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization...
00:16:16.623 [2024-12-07 04:00:59.137698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71750 ]
00:16:16.623 [2024-12-07 04:00:59.341159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:16.883 [2024-12-07 04:00:59.450128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:17.822 xnvme_bdev
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name'
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]]
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]]
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]]
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]]
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71750
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71750 ']'
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71750
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71750
killing process with pid 71750
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71750'
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71750
00:16:17.822 04:01:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71750
00:16:20.365 ************************************
00:16:20.365 END TEST xnvme_rpc
00:16:20.365 ************************************
00:16:20.365
00:16:20.365 real 0m3.806s
00:16:20.365 user 0m3.818s
00:16:20.365 sys 0m0.572s
00:16:20.365 04:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:20.365 04:01:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:16:20.365 04:01:02 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf
00:16:20.365 04:01:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:20.365 04:01:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:20.365 04:01:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:20.365 ************************************
00:16:20.365 START TEST xnvme_bdevperf
00:16:20.365 ************************************
00:16:20.365 04:01:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf
00:16:20.365 04:01:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern
00:16:20.365 04:01:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring
00:16:20.365 04:01:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:16:20.365 04:01:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
00:16:20.365 04:01:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:16:20.365 04:01:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:16:20.365 04:01:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:16:20.365 {
00:16:20.365 "subsystems": [
00:16:20.365 {
00:16:20.365 "subsystem": "bdev",
00:16:20.365 "config": [
00:16:20.365 {
00:16:20.365 "params": {
00:16:20.365 "io_mechanism": "io_uring",
00:16:20.365 "conserve_cpu": true,
00:16:20.365 "filename": "/dev/nvme0n1",
00:16:20.365 "name": "xnvme_bdev"
00:16:20.365 },
00:16:20.365 "method": "bdev_xnvme_create"
00:16:20.365 },
00:16:20.365 {
00:16:20.365 "method": "bdev_wait_for_examine"
00:16:20.365 }
00:16:20.365 ]
00:16:20.365 }
00:16:20.365 ]
00:16:20.365 }
00:16:20.365 [2024-12-07 04:01:02.984466] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization...
00:16:20.365 [2024-12-07 04:01:02.984753] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71835 ]
00:16:20.649 [2024-12-07 04:01:03.163324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:20.649 [2024-12-07 04:01:03.272285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:20.960 Running I/O for 5 seconds...
00:16:23.264 65664.00 IOPS, 256.50 MiB/s [2024-12-07T04:01:06.929Z]
65502.00 IOPS, 255.87 MiB/s [2024-12-07T04:01:07.868Z]
64681.33 IOPS, 252.66 MiB/s [2024-12-07T04:01:08.797Z]
63614.75 IOPS, 248.50 MiB/s
00:16:26.061 Latency(us)
00:16:26.061 [2024-12-07T04:01:08.797Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:26.061 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096)
00:16:26.061 xnvme_bdev : 5.00 63359.64 247.50 0.00 0.00 1006.93 121.73 3921.63
00:16:26.061 [2024-12-07T04:01:08.797Z] ===================================================================================================================
00:16:26.061 [2024-12-07T04:01:08.797Z] Total : 63359.64 247.50 0.00 0.00 1006.93 121.73 3921.63
00:16:26.995 04:01:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}"
00:16:26.995 04:01:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf
00:16:26.995 04:01:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096
00:16:26.995 04:01:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable
00:16:26.995 04:01:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:16:27.254 {
00:16:27.254 "subsystems": [
00:16:27.254 {
00:16:27.254 "subsystem": "bdev",
00:16:27.254 "config": [
00:16:27.254 {
00:16:27.254 "params": {
00:16:27.254 "io_mechanism": "io_uring",
00:16:27.254 "conserve_cpu": true,
00:16:27.254 "filename": "/dev/nvme0n1",
00:16:27.254 "name": "xnvme_bdev"
00:16:27.254 },
00:16:27.254 "method": "bdev_xnvme_create"
00:16:27.255 },
00:16:27.255 {
00:16:27.255 "method": "bdev_wait_for_examine"
00:16:27.255 }
00:16:27.255 ]
00:16:27.255 }
00:16:27.255 ]
00:16:27.255 }
00:16:27.255 [2024-12-07 04:01:09.774060] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization...
00:16:27.255 [2024-12-07 04:01:09.774166] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71911 ] 00:16:27.255 [2024-12-07 04:01:09.952153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:27.514 [2024-12-07 04:01:10.064453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.776 Running I/O for 5 seconds... 00:16:30.093 23488.00 IOPS, 91.75 MiB/s [2024-12-07T04:01:13.769Z] 22912.00 IOPS, 89.50 MiB/s [2024-12-07T04:01:14.723Z] 23594.67 IOPS, 92.17 MiB/s [2024-12-07T04:01:15.666Z] 23488.00 IOPS, 91.75 MiB/s 00:16:32.930 Latency(us) 00:16:32.930 [2024-12-07T04:01:15.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:32.930 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:32.930 xnvme_bdev : 5.01 23367.81 91.28 0.00 0.00 2729.93 1013.31 8422.30 00:16:32.930 [2024-12-07T04:01:15.666Z] =================================================================================================================== 00:16:32.930 [2024-12-07T04:01:15.666Z] Total : 23367.81 91.28 0.00 0.00 2729.93 1013.31 8422.30 00:16:33.869 00:16:33.869 real 0m13.605s 00:16:33.869 user 0m7.296s 00:16:33.869 sys 0m5.774s 00:16:33.869 ************************************ 00:16:33.869 END TEST xnvme_bdevperf 00:16:33.869 ************************************ 00:16:33.869 04:01:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:33.869 04:01:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:33.869 04:01:16 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:33.869 04:01:16 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:33.869 04:01:16 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:33.869 04:01:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:33.869 ************************************ 00:16:33.869 START TEST xnvme_fio_plugin 00:16:33.869 ************************************ 00:16:33.869 04:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:33.869 04:01:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:33.869 04:01:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:16:33.869 04:01:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:33.869 04:01:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:33.869 04:01:16 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:33.869 04:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:33.869 04:01:16 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:33.869 04:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:16:33.869 04:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:33.869 04:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:33.869 04:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:33.869 04:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:33.869 04:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:33.869 04:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:33.869 04:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:33.869 04:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:33.869 04:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:33.869 04:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:34.129 04:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:34.129 04:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:34.129 04:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:34.129 04:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:34.129 04:01:16 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:34.129 { 00:16:34.129 "subsystems": [ 00:16:34.129 { 00:16:34.129 "subsystem": "bdev", 00:16:34.129 "config": [ 00:16:34.129 { 00:16:34.129 "params": { 00:16:34.129 "io_mechanism": "io_uring", 00:16:34.129 "conserve_cpu": true, 00:16:34.129 "filename": "/dev/nvme0n1", 00:16:34.129 "name": "xnvme_bdev" 00:16:34.129 }, 00:16:34.129 "method": "bdev_xnvme_create" 00:16:34.129 }, 00:16:34.129 { 00:16:34.129 "method": "bdev_wait_for_examine" 00:16:34.129 } 00:16:34.129 ] 00:16:34.129 } 00:16:34.129 ] 00:16:34.129 } 00:16:34.129 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:34.129 fio-3.35 00:16:34.129 Starting 1 thread 00:16:40.700 00:16:40.700 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72035: Sat Dec 7 04:01:22 2024 00:16:40.700 read: IOPS=23.3k, BW=90.9MiB/s (95.3MB/s)(455MiB/5001msec) 00:16:40.700 slat (usec): min=3, max=103, avg= 8.25, stdev= 3.68 00:16:40.700 clat (usec): min=1481, max=4085, avg=2417.28, stdev=286.75 00:16:40.700 lat (usec): min=1485, max=4098, avg=2425.53, stdev=287.93 00:16:40.700 clat percentiles (usec): 00:16:40.700 | 1.00th=[ 1680], 5.00th=[ 1860], 10.00th=[ 2008], 20.00th=[ 2212], 00:16:40.700 | 30.00th=[ 2311], 40.00th=[ 2376], 50.00th=[ 2442], 60.00th=[ 2507], 00:16:40.700 | 70.00th=[ 2606], 80.00th=[ 2671], 90.00th=[ 2769], 95.00th=[ 2835], 00:16:40.700 | 99.00th=[ 2900], 99.50th=[ 2966], 99.90th=[ 3228], 99.95th=[ 3556], 00:16:40.700 | 99.99th=[ 3982] 00:16:40.700 bw ( KiB/s): min=87552, max=100352, per=100.00%, avg=93696.00, 
stdev=4622.20, samples=9 00:16:40.700 iops : min=21888, max=25088, avg=23424.00, stdev=1155.55, samples=9 00:16:40.700 lat (msec) : 2=9.57%, 4=90.42%, 10=0.01% 00:16:40.700 cpu : usr=39.88%, sys=54.96%, ctx=14, majf=0, minf=762 00:16:40.700 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:40.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.700 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:16:40.700 issued rwts: total=116352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:40.700 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:40.700 00:16:40.700 Run status group 0 (all jobs): 00:16:40.700 READ: bw=90.9MiB/s (95.3MB/s), 90.9MiB/s-90.9MiB/s (95.3MB/s-95.3MB/s), io=455MiB (477MB), run=5001-5001msec 00:16:41.268 ----------------------------------------------------- 00:16:41.268 Suppressions used: 00:16:41.268 count bytes template 00:16:41.268 1 11 /usr/src/fio/parse.c 00:16:41.268 1 8 libtcmalloc_minimal.so 00:16:41.268 1 904 libcrypto.so 00:16:41.268 ----------------------------------------------------- 00:16:41.268 00:16:41.268 04:01:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:41.268 04:01:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:41.268 04:01:23 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:41.268 04:01:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:41.268 04:01:23 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:41.268 04:01:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:41.268 04:01:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:41.268 04:01:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:41.268 04:01:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:41.268 04:01:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:41.268 04:01:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:41.268 04:01:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:41.268 04:01:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:41.268 04:01:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:41.268 04:01:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:41.268 04:01:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:41.268 { 00:16:41.268 "subsystems": [ 00:16:41.268 { 00:16:41.268 "subsystem": "bdev", 00:16:41.268 "config": [ 00:16:41.268 { 00:16:41.268 "params": { 00:16:41.268 "io_mechanism": "io_uring", 00:16:41.268 "conserve_cpu": true, 00:16:41.268 
"filename": "/dev/nvme0n1", 00:16:41.268 "name": "xnvme_bdev" 00:16:41.268 }, 00:16:41.268 "method": "bdev_xnvme_create" 00:16:41.268 }, 00:16:41.268 { 00:16:41.268 "method": "bdev_wait_for_examine" 00:16:41.268 } 00:16:41.268 ] 00:16:41.268 } 00:16:41.268 ] 00:16:41.268 } 00:16:41.268 04:01:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:41.268 04:01:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:41.268 04:01:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:41.268 04:01:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:41.268 04:01:23 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:41.529 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:41.529 fio-3.35 00:16:41.529 Starting 1 thread 00:16:48.157 00:16:48.157 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72128: Sat Dec 7 04:01:29 2024 00:16:48.157 write: IOPS=23.3k, BW=91.1MiB/s (95.6MB/s)(456MiB/5001msec); 0 zone resets 00:16:48.157 slat (nsec): min=3799, max=81418, avg=8457.15, stdev=3513.12 00:16:48.157 clat (usec): min=1363, max=4694, avg=2405.60, stdev=275.29 00:16:48.157 lat (usec): min=1367, max=4708, avg=2414.06, stdev=276.37 00:16:48.157 clat percentiles (usec): 00:16:48.157 | 1.00th=[ 1614], 5.00th=[ 1893], 10.00th=[ 2057], 20.00th=[ 2212], 00:16:48.157 | 30.00th=[ 2278], 40.00th=[ 2343], 50.00th=[ 2409], 60.00th=[ 2507], 00:16:48.157 | 70.00th=[ 2573], 80.00th=[ 2638], 90.00th=[ 2737], 95.00th=[ 2802], 00:16:48.157 | 99.00th=[ 2900], 99.50th=[ 2900], 99.90th=[ 2999], 99.95th=[ 4146], 00:16:48.157 | 99.99th=[ 4621] 00:16:48.157 bw ( KiB/s): min=89088, max=102912, per=100.00%, avg=93639.11, stdev=4751.16, samples=9 00:16:48.157 iops : min=22272, max=25728, avg=23409.78, stdev=1187.79, samples=9 00:16:48.157 lat (msec) : 2=7.82%, 4=92.13%, 10=0.05% 00:16:48.157 cpu : usr=41.16%, sys=54.00%, ctx=8, majf=0, minf=763 00:16:48.157 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:48.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:48.157 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:16:48.157 issued rwts: total=0,116672,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:48.157 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:48.157 00:16:48.157 Run status group 0 (all jobs): 00:16:48.157 WRITE: bw=91.1MiB/s (95.6MB/s), 91.1MiB/s-91.1MiB/s (95.6MB/s-95.6MB/s), io=456MiB (478MB), run=5001-5001msec 00:16:48.726 ----------------------------------------------------- 00:16:48.726 Suppressions used: 00:16:48.726 count bytes template 00:16:48.726 1 11 /usr/src/fio/parse.c 00:16:48.726 1 8 libtcmalloc_minimal.so 00:16:48.726 1 904 libcrypto.so 00:16:48.726 ----------------------------------------------------- 00:16:48.727 00:16:48.727 ************************************ 00:16:48.727 END TEST xnvme_fio_plugin 00:16:48.727 ************************************ 00:16:48.727 00:16:48.727 real 0m14.724s 00:16:48.727 user 0m7.860s 00:16:48.727 sys 0m6.119s 00:16:48.727 04:01:31 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:48.727 04:01:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:48.727 04:01:31 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:16:48.727 04:01:31 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:16:48.727 04:01:31 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:16:48.727 04:01:31 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:16:48.727 04:01:31 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:16:48.727 04:01:31 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:48.727 04:01:31 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:16:48.727 04:01:31 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:16:48.727 04:01:31 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:48.727 04:01:31 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:48.727 04:01:31 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:48.727 04:01:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:48.727 ************************************ 00:16:48.727 START TEST xnvme_rpc 00:16:48.727 ************************************ 00:16:48.727 04:01:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:48.727 04:01:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:48.727 04:01:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:48.727 04:01:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:48.727 04:01:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:48.727 04:01:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:48.727 04:01:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72214 00:16:48.727 04:01:31 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72214 00:16:48.727 04:01:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72214 ']' 00:16:48.727 04:01:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.727 04:01:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:48.727 04:01:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.727 04:01:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:48.727 04:01:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:48.986 [2024-12-07 04:01:31.478102] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
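In the trace below, rpc_cmd wraps SPDK's scripts/rpc.py against the spdk_tgt just started on /var/tmp/spdk.sock. A rough sketch of the equivalent manual sequence (assuming the standard scripts/rpc.py entry point; the jq filter is the same one the test applies):

# Create the xnvme bdev over io_uring_cmd, inspect it, then tear it down.
./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev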
00:16:48.986 [2024-12-07 04:01:31.478370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72214 ] 00:16:48.986 [2024-12-07 04:01:31.657383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.246 [2024-12-07 04:01:31.761904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.185 xnvme_bdev 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:50.185 
04:01:32 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72214 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72214 ']' 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72214 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72214 00:16:50.185 killing process with pid 72214 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72214' 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72214 00:16:50.185 04:01:32 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72214 00:16:52.731 00:16:52.731 real 0m3.751s 00:16:52.731 user 0m3.779s 00:16:52.731 sys 0m0.570s 00:16:52.731 04:01:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:52.731 ************************************ 00:16:52.731 END TEST xnvme_rpc 00:16:52.731 ************************************ 00:16:52.731 04:01:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.731 04:01:35 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:52.731 04:01:35 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:52.731 04:01:35 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:52.731 04:01:35 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:52.731 ************************************ 00:16:52.731 START TEST xnvme_bdevperf 00:16:52.731 ************************************ 00:16:52.731 04:01:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:52.731 04:01:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:52.731 04:01:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:16:52.731 04:01:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:52.731 04:01:35 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:52.731 04:01:35 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:16:52.731 04:01:35 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:52.731 04:01:35 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:52.731 { 00:16:52.731 "subsystems": [ 00:16:52.731 { 00:16:52.731 "subsystem": "bdev", 00:16:52.731 "config": [ 00:16:52.731 { 00:16:52.731 "params": { 00:16:52.731 "io_mechanism": "io_uring_cmd", 00:16:52.731 "conserve_cpu": false, 00:16:52.731 "filename": "/dev/ng0n1", 00:16:52.731 "name": "xnvme_bdev" 00:16:52.731 }, 00:16:52.731 "method": "bdev_xnvme_create" 00:16:52.731 }, 00:16:52.731 { 00:16:52.731 "method": "bdev_wait_for_examine" 00:16:52.731 } 00:16:52.731 ] 00:16:52.731 } 00:16:52.731 ] 00:16:52.731 } 00:16:52.731 [2024-12-07 04:01:35.297121] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:16:52.731 [2024-12-07 04:01:35.297237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72294 ] 00:16:52.989 [2024-12-07 04:01:35.479905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:52.989 [2024-12-07 04:01:35.581651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.246 Running I/O for 5 seconds... 00:16:55.550 58112.00 IOPS, 227.00 MiB/s [2024-12-07T04:01:39.219Z] 57087.50 IOPS, 223.00 MiB/s [2024-12-07T04:01:40.152Z] 53717.00 IOPS, 209.83 MiB/s [2024-12-07T04:01:41.085Z] 54495.50 IOPS, 212.87 MiB/s [2024-12-07T04:01:41.085Z] 54346.40 IOPS, 212.29 MiB/s 00:16:58.349 Latency(us) 00:16:58.349 [2024-12-07T04:01:41.085Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:58.349 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:58.349 xnvme_bdev : 5.00 54333.16 212.24 0.00 0.00 1174.64 733.66 4342.75 00:16:58.349 [2024-12-07T04:01:41.085Z] =================================================================================================================== 00:16:58.349 [2024-12-07T04:01:41.085Z] Total : 54333.16 212.24 0.00 0.00 1174.64 733.66 4342.75 00:16:59.337 04:01:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:59.337 04:01:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:59.337 04:01:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:59.337 04:01:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:59.337 04:01:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:59.337 { 00:16:59.337 "subsystems": [ 00:16:59.337 { 00:16:59.337 "subsystem": "bdev", 00:16:59.337 "config": [ 00:16:59.337 { 00:16:59.337 "params": { 00:16:59.337 "io_mechanism": "io_uring_cmd", 00:16:59.337 "conserve_cpu": false, 00:16:59.337 "filename": "/dev/ng0n1", 00:16:59.337 "name": "xnvme_bdev" 00:16:59.337 }, 00:16:59.337 "method": "bdev_xnvme_create" 00:16:59.337 }, 00:16:59.337 { 00:16:59.337 "method": "bdev_wait_for_examine" 00:16:59.337 } 00:16:59.337 ] 00:16:59.337 } 00:16:59.337 ] 00:16:59.337 } 00:16:59.601 [2024-12-07 04:01:42.073707] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
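The MiB/s column in these result tables is just IOPS scaled by the 4096-byte I/O size; a quick check against the randread total above:

awk 'BEGIN { print 54333.16 * 4096 / 2^20 }'   # ~212.24 MiB/s, matching the table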
00:16:59.601 [2024-12-07 04:01:42.073823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72374 ] 00:16:59.601 [2024-12-07 04:01:42.253684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.861 [2024-12-07 04:01:42.361439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.120 Running I/O for 5 seconds... 00:17:02.005 28480.00 IOPS, 111.25 MiB/s [2024-12-07T04:01:46.124Z] 25632.00 IOPS, 100.12 MiB/s [2024-12-07T04:01:47.060Z] 24725.33 IOPS, 96.58 MiB/s [2024-12-07T04:01:47.997Z] 24272.00 IOPS, 94.81 MiB/s 00:17:05.261 Latency(us) 00:17:05.261 [2024-12-07T04:01:47.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.261 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:05.261 xnvme_bdev : 5.01 23957.67 93.58 0.00 0.00 2662.39 1204.13 7685.35 00:17:05.261 [2024-12-07T04:01:47.997Z] =================================================================================================================== 00:17:05.261 [2024-12-07T04:01:47.997Z] Total : 23957.67 93.58 0.00 0.00 2662.39 1204.13 7685.35 00:17:06.198 04:01:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:06.198 04:01:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:17:06.198 04:01:48 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:06.198 04:01:48 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:06.198 04:01:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:06.198 { 00:17:06.198 "subsystems": [ 00:17:06.198 { 00:17:06.198 "subsystem": "bdev", 00:17:06.198 "config": [ 00:17:06.198 { 00:17:06.198 "params": { 00:17:06.198 "io_mechanism": "io_uring_cmd", 00:17:06.198 "conserve_cpu": false, 00:17:06.198 "filename": "/dev/ng0n1", 00:17:06.198 "name": "xnvme_bdev" 00:17:06.198 }, 00:17:06.198 "method": "bdev_xnvme_create" 00:17:06.198 }, 00:17:06.198 { 00:17:06.198 "method": "bdev_wait_for_examine" 00:17:06.198 } 00:17:06.198 ] 00:17:06.198 } 00:17:06.198 ] 00:17:06.198 } 00:17:06.199 [2024-12-07 04:01:48.886809] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:17:06.199 [2024-12-07 04:01:48.886951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72448 ] 00:17:06.457 [2024-12-07 04:01:49.070271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.457 [2024-12-07 04:01:49.178335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.024 Running I/O for 5 seconds... 
00:17:08.899 72704.00 IOPS, 284.00 MiB/s [2024-12-07T04:01:52.573Z] 72704.00 IOPS, 284.00 MiB/s [2024-12-07T04:01:53.512Z] 72640.00 IOPS, 283.75 MiB/s [2024-12-07T04:01:54.892Z] 72704.00 IOPS, 284.00 MiB/s [2024-12-07T04:01:54.892Z] 72780.80 IOPS, 284.30 MiB/s 00:17:12.156 Latency(us) 00:17:12.156 [2024-12-07T04:01:54.892Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.156 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:17:12.156 xnvme_bdev : 5.00 72768.31 284.25 0.00 0.00 876.88 677.73 2447.73 00:17:12.156 [2024-12-07T04:01:54.892Z] =================================================================================================================== 00:17:12.156 [2024-12-07T04:01:54.892Z] Total : 72768.31 284.25 0.00 0.00 876.88 677.73 2447.73 00:17:13.094 04:01:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:13.094 04:01:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:17:13.094 04:01:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:13.094 04:01:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:13.094 04:01:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:13.094 { 00:17:13.094 "subsystems": [ 00:17:13.094 { 00:17:13.094 "subsystem": "bdev", 00:17:13.094 "config": [ 00:17:13.094 { 00:17:13.094 "params": { 00:17:13.094 "io_mechanism": "io_uring_cmd", 00:17:13.094 "conserve_cpu": false, 00:17:13.094 "filename": "/dev/ng0n1", 00:17:13.094 "name": "xnvme_bdev" 00:17:13.094 }, 00:17:13.094 "method": "bdev_xnvme_create" 00:17:13.094 }, 00:17:13.094 { 00:17:13.094 "method": "bdev_wait_for_examine" 00:17:13.094 } 00:17:13.094 ] 00:17:13.094 } 00:17:13.094 ] 00:17:13.094 } 00:17:13.094 [2024-12-07 04:01:55.690907] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:17:13.094 [2024-12-07 04:01:55.691038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72533 ] 00:17:13.353 [2024-12-07 04:01:55.870617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.353 [2024-12-07 04:01:55.979992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.613 Running I/O for 5 seconds... 
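The write_zeroes run in flight here is the last of the four patterns this io_uring_cmd bdevperf pass cycles through. Condensed, the loop amounts to the sketch below, with /tmp/xnvme.json standing in for the config the harness streams over /dev/fd/62:

for w in randread randwrite unmap write_zeroes; do
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /tmp/xnvme.json -q 64 -w "$w" -t 5 -T xnvme_bdev -o 4096
done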
00:17:15.925 33126.00 IOPS, 129.40 MiB/s [2024-12-07T04:01:59.597Z] 33855.50 IOPS, 132.25 MiB/s [2024-12-07T04:02:00.533Z] 36788.00 IOPS, 143.70 MiB/s [2024-12-07T04:02:01.468Z] 39336.25 IOPS, 153.66 MiB/s [2024-12-07T04:02:01.468Z] 42218.60 IOPS, 164.92 MiB/s 00:17:18.732 Latency(us) 00:17:18.732 [2024-12-07T04:02:01.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.732 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:17:18.732 xnvme_bdev : 5.00 42203.92 164.86 0.00 0.00 1511.43 158.74 10369.95 00:17:18.732 [2024-12-07T04:02:01.468Z] =================================================================================================================== 00:17:18.732 [2024-12-07T04:02:01.468Z] Total : 42203.92 164.86 0.00 0.00 1511.43 158.74 10369.95 00:17:20.108 00:17:20.108 real 0m27.311s 00:17:20.108 user 0m14.071s 00:17:20.108 sys 0m12.822s 00:17:20.108 ************************************ 00:17:20.108 END TEST xnvme_bdevperf 00:17:20.108 ************************************ 00:17:20.108 04:02:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:20.108 04:02:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:20.108 04:02:02 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:17:20.108 04:02:02 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:20.108 04:02:02 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:20.108 04:02:02 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:20.108 ************************************ 00:17:20.108 START TEST xnvme_fio_plugin 00:17:20.108 ************************************ 00:17:20.108 04:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:17:20.108 04:02:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:17:20.108 04:02:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:17:20.108 04:02:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:20.108 04:02:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:20.108 04:02:02 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:20.108 04:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:20.108 04:02:02 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:20.108 04:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:20.108 04:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:20.108 04:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:20.108 04:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:20.108 04:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
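Assembled from the xtrace that follows, the fio_bdev call resolves to a plain fio invocation with SPDK's external bdev engine (plus ASAN) preloaded:

LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev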
00:17:20.108 04:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:20.108 04:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:20.108 04:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:20.108 04:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:20.108 04:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:20.108 04:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:20.108 04:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:20.108 04:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:20.108 04:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:20.108 04:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:20.108 04:02:02 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:20.108 { 00:17:20.108 "subsystems": [ 00:17:20.108 { 00:17:20.108 "subsystem": "bdev", 00:17:20.108 "config": [ 00:17:20.108 { 00:17:20.108 "params": { 00:17:20.108 "io_mechanism": "io_uring_cmd", 00:17:20.108 "conserve_cpu": false, 00:17:20.108 "filename": "/dev/ng0n1", 00:17:20.108 "name": "xnvme_bdev" 00:17:20.108 }, 00:17:20.108 "method": "bdev_xnvme_create" 00:17:20.108 }, 00:17:20.108 { 00:17:20.108 "method": "bdev_wait_for_examine" 00:17:20.108 } 00:17:20.108 ] 00:17:20.108 } 00:17:20.108 ] 00:17:20.108 } 00:17:20.108 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:20.108 fio-3.35 00:17:20.108 Starting 1 thread 00:17:26.675 00:17:26.675 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72654: Sat Dec 7 04:02:08 2024 00:17:26.675 read: IOPS=23.2k, BW=90.5MiB/s (94.8MB/s)(452MiB/5001msec) 00:17:26.675 slat (usec): min=2, max=218, avg= 8.42, stdev= 3.89 00:17:26.675 clat (usec): min=932, max=6669, avg=2423.31, stdev=342.37 00:17:26.675 lat (usec): min=935, max=6697, avg=2431.73, stdev=343.64 00:17:26.675 clat percentiles (usec): 00:17:26.675 | 1.00th=[ 1188], 5.00th=[ 1745], 10.00th=[ 2114], 20.00th=[ 2245], 00:17:26.675 | 30.00th=[ 2311], 40.00th=[ 2409], 50.00th=[ 2474], 60.00th=[ 2540], 00:17:26.675 | 70.00th=[ 2606], 80.00th=[ 2671], 90.00th=[ 2737], 95.00th=[ 2802], 00:17:26.675 | 99.00th=[ 2966], 99.50th=[ 3064], 99.90th=[ 3687], 99.95th=[ 6128], 00:17:26.675 | 99.99th=[ 6587] 00:17:26.675 bw ( KiB/s): min=87040, max=114688, per=100.00%, avg=92984.89, stdev=8320.77, samples=9 00:17:26.675 iops : min=21760, max=28672, avg=23246.22, stdev=2080.19, samples=9 00:17:26.675 lat (usec) : 1000=0.09% 00:17:26.675 lat (msec) : 2=6.83%, 4=93.00%, 10=0.07% 00:17:26.675 cpu : usr=40.48%, sys=57.80%, ctx=13, majf=0, minf=762 00:17:26.675 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:26.675 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:26.675 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, 
>=64=0.0% 00:17:26.675 issued rwts: total=115808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:26.676 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:26.676 00:17:26.676 Run status group 0 (all jobs): 00:17:26.676 READ: bw=90.5MiB/s (94.8MB/s), 90.5MiB/s-90.5MiB/s (94.8MB/s-94.8MB/s), io=452MiB (474MB), run=5001-5001msec 00:17:27.611 ----------------------------------------------------- 00:17:27.611 Suppressions used: 00:17:27.611 count bytes template 00:17:27.612 1 11 /usr/src/fio/parse.c 00:17:27.612 1 8 libtcmalloc_minimal.so 00:17:27.612 1 904 libcrypto.so 00:17:27.612 ----------------------------------------------------- 00:17:27.612 00:17:27.612 04:02:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:27.612 04:02:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:27.612 04:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:27.612 04:02:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:27.612 04:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:27.612 04:02:10 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:27.612 04:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:27.612 04:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:27.612 04:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:27.612 04:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:27.612 04:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:27.612 04:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:27.612 04:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:27.612 04:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:27.612 04:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:27.612 04:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:27.612 04:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:27.612 04:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:27.612 04:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:27.612 04:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:27.612 04:02:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:27.612 { 00:17:27.612 "subsystems": [ 00:17:27.612 { 00:17:27.612 "subsystem": "bdev", 00:17:27.612 "config": [ 00:17:27.612 { 00:17:27.612 "params": { 00:17:27.612 "io_mechanism": "io_uring_cmd", 00:17:27.612 "conserve_cpu": false, 00:17:27.612 "filename": "/dev/ng0n1", 00:17:27.612 "name": "xnvme_bdev" 00:17:27.612 }, 00:17:27.612 "method": "bdev_xnvme_create" 00:17:27.612 }, 00:17:27.612 { 00:17:27.612 "method": "bdev_wait_for_examine" 00:17:27.612 } 00:17:27.612 ] 00:17:27.612 } 00:17:27.612 ] 00:17:27.612 } 00:17:27.612 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:27.612 fio-3.35 00:17:27.612 Starting 1 thread 00:17:34.182 00:17:34.182 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72752: Sat Dec 7 04:02:16 2024 00:17:34.182 write: IOPS=23.2k, BW=90.7MiB/s (95.1MB/s)(454MiB/5002msec); 0 zone resets 00:17:34.182 slat (nsec): min=4325, max=90852, avg=8758.89, stdev=3523.35 00:17:34.182 clat (usec): min=1545, max=3698, avg=2404.87, stdev=224.54 00:17:34.182 lat (usec): min=1551, max=3727, avg=2413.63, stdev=225.10 00:17:34.182 clat percentiles (usec): 00:17:34.182 | 1.00th=[ 1876], 5.00th=[ 2057], 10.00th=[ 2147], 20.00th=[ 2212], 00:17:34.182 | 30.00th=[ 2278], 40.00th=[ 2343], 50.00th=[ 2409], 60.00th=[ 2474], 00:17:34.182 | 70.00th=[ 2540], 80.00th=[ 2606], 90.00th=[ 2671], 95.00th=[ 2737], 00:17:34.182 | 99.00th=[ 2900], 99.50th=[ 3097], 99.90th=[ 3458], 99.95th=[ 3523], 00:17:34.182 | 99.99th=[ 3589] 00:17:34.182 bw ( KiB/s): min=90624, max=96256, per=100.00%, avg=92935.78, stdev=1557.36, samples=9 00:17:34.182 iops : min=22656, max=24064, avg=23233.89, stdev=389.34, samples=9 00:17:34.182 lat (msec) : 2=2.78%, 4=97.22% 00:17:34.182 cpu : usr=41.21%, sys=57.13%, ctx=11, majf=0, minf=763 00:17:34.182 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:34.182 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:34.182 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:17:34.182 issued rwts: total=0,116096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:34.182 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:34.182 00:17:34.182 Run status group 0 (all jobs): 00:17:34.182 WRITE: bw=90.7MiB/s (95.1MB/s), 90.7MiB/s-90.7MiB/s (95.1MB/s-95.1MB/s), io=454MiB (476MB), run=5002-5002msec 00:17:34.751 ----------------------------------------------------- 00:17:34.751 Suppressions used: 00:17:34.751 count bytes template 00:17:34.751 1 11 /usr/src/fio/parse.c 00:17:34.751 1 8 libtcmalloc_minimal.so 00:17:34.751 1 904 libcrypto.so 00:17:34.751 ----------------------------------------------------- 00:17:34.751 00:17:34.751 00:17:34.751 real 0m14.837s 00:17:34.751 user 0m8.003s 00:17:34.751 sys 0m6.412s 00:17:34.751 04:02:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:34.751 04:02:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:34.751 ************************************ 00:17:34.751 END TEST xnvme_fio_plugin 00:17:34.751 ************************************ 00:17:34.751 04:02:17 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:17:34.751 04:02:17 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:17:34.751 04:02:17 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:17:34.751 04:02:17 nvme_xnvme -- 
xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:17:34.751 04:02:17 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:34.751 04:02:17 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:34.751 04:02:17 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:35.011 ************************************ 00:17:35.011 START TEST xnvme_rpc 00:17:35.011 ************************************ 00:17:35.011 04:02:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:17:35.011 04:02:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:17:35.011 04:02:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:17:35.011 04:02:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:17:35.011 04:02:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:17:35.011 04:02:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72838 00:17:35.011 04:02:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:35.011 04:02:17 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72838 00:17:35.011 04:02:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72838 ']' 00:17:35.011 04:02:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.011 04:02:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:35.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.011 04:02:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.011 04:02:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:35.011 04:02:17 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:35.011 [2024-12-07 04:02:17.608153] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
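This round repeats xnvme_rpc with conserve_cpu=true; per the cc mapping above (cc["true"]=-c), the only change is the -c flag on the create call. A sketch of the equivalent manual RPCs, again assuming the standard scripts/rpc.py:

./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
./scripts/rpc.py framework_get_config bdev \
    | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true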
00:17:35.011 [2024-12-07 04:02:17.608295] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72838 ] 00:17:35.271 [2024-12-07 04:02:17.790014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.271 [2024-12-07 04:02:17.892941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.210 xnvme_bdev 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72838 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72838 ']' 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72838 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:36.210 04:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72838 00:17:36.470 04:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:36.470 04:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:36.470 killing process with pid 72838 00:17:36.470 04:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72838' 00:17:36.470 04:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72838 00:17:36.470 04:02:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72838 00:17:39.071 00:17:39.071 real 0m3.774s 00:17:39.071 user 0m3.836s 00:17:39.071 sys 0m0.540s 00:17:39.071 04:02:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:39.071 04:02:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:39.071 ************************************ 00:17:39.071 END TEST xnvme_rpc 00:17:39.071 ************************************ 00:17:39.071 04:02:21 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:17:39.071 04:02:21 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:39.071 04:02:21 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:39.071 04:02:21 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:39.071 ************************************ 00:17:39.071 START TEST xnvme_bdevperf 00:17:39.071 ************************************ 00:17:39.071 04:02:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:17:39.071 04:02:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:17:39.071 04:02:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:17:39.071 04:02:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:39.071 04:02:21 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:17:39.071 04:02:21 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:17:39.071 04:02:21 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:39.071 04:02:21 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:39.071 { 00:17:39.071 "subsystems": [ 00:17:39.071 { 00:17:39.071 "subsystem": "bdev", 00:17:39.071 "config": [ 00:17:39.071 { 00:17:39.071 "params": { 00:17:39.071 "io_mechanism": "io_uring_cmd", 00:17:39.071 "conserve_cpu": true, 00:17:39.071 "filename": "/dev/ng0n1", 00:17:39.071 "name": "xnvme_bdev" 00:17:39.071 }, 00:17:39.071 "method": "bdev_xnvme_create" 00:17:39.071 }, 00:17:39.071 { 00:17:39.071 "method": "bdev_wait_for_examine" 00:17:39.071 } 00:17:39.071 ] 00:17:39.071 } 00:17:39.071 ] 00:17:39.071 } 00:17:39.071 [2024-12-07 04:02:21.435968] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:17:39.071 [2024-12-07 04:02:21.436093] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72918 ] 00:17:39.071 [2024-12-07 04:02:21.614158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.071 [2024-12-07 04:02:21.720157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.639 Running I/O for 5 seconds... 00:17:41.511 44533.00 IOPS, 173.96 MiB/s [2024-12-07T04:02:25.178Z] 44730.50 IOPS, 174.73 MiB/s [2024-12-07T04:02:26.107Z] 45734.33 IOPS, 178.65 MiB/s [2024-12-07T04:02:27.483Z] 46940.75 IOPS, 183.36 MiB/s 00:17:44.747 Latency(us) 00:17:44.747 [2024-12-07T04:02:27.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.747 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:17:44.747 xnvme_bdev : 5.00 47148.29 184.17 0.00 0.00 1353.66 171.90 5027.06 00:17:44.747 [2024-12-07T04:02:27.483Z] =================================================================================================================== 00:17:44.747 [2024-12-07T04:02:27.483Z] Total : 47148.29 184.17 0.00 0.00 1353.66 171.90 5027.06 00:17:45.684 04:02:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:45.684 04:02:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:17:45.684 04:02:28 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:45.684 04:02:28 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:45.684 04:02:28 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:45.684 { 00:17:45.684 "subsystems": [ 00:17:45.684 { 00:17:45.684 "subsystem": "bdev", 00:17:45.684 "config": [ 00:17:45.684 { 00:17:45.684 "params": { 00:17:45.684 "io_mechanism": "io_uring_cmd", 00:17:45.684 "conserve_cpu": true, 00:17:45.684 "filename": "/dev/ng0n1", 00:17:45.684 "name": "xnvme_bdev" 00:17:45.684 }, 00:17:45.684 "method": "bdev_xnvme_create" 00:17:45.684 }, 00:17:45.684 { 00:17:45.684 "method": "bdev_wait_for_examine" 00:17:45.684 } 00:17:45.684 ] 00:17:45.684 } 00:17:45.684 ] 00:17:45.684 } 00:17:45.684 [2024-12-07 04:02:28.254038] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
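As a sanity check, queue depth, IOPS, and mean latency in these tables hang together via Little's law (outstanding I/O = IOPS x mean latency); for the conserve_cpu=true randread total above:

awk 'BEGIN { print 64 / (1353.66 / 1e6) }'   # ~47279 IOPS predicted at qd=64; 47148 measured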
00:17:45.685 [2024-12-07 04:02:28.254176] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72992 ] 00:17:45.943 [2024-12-07 04:02:28.425175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.943 [2024-12-07 04:02:28.532608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.202 Running I/O for 5 seconds... 00:17:48.148 25760.00 IOPS, 100.62 MiB/s [2024-12-07T04:02:32.264Z] 24784.00 IOPS, 96.81 MiB/s [2024-12-07T04:02:33.202Z] 24394.67 IOPS, 95.29 MiB/s [2024-12-07T04:02:34.141Z] 24264.00 IOPS, 94.78 MiB/s 00:17:51.405 Latency(us) 00:17:51.405 [2024-12-07T04:02:34.141Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:51.405 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:17:51.405 xnvme_bdev : 5.00 24151.38 94.34 0.00 0.00 2640.89 881.71 8264.38 00:17:51.405 [2024-12-07T04:02:34.141Z] =================================================================================================================== 00:17:51.405 [2024-12-07T04:02:34.141Z] Total : 24151.38 94.34 0.00 0.00 2640.89 881.71 8264.38 00:17:52.347 04:02:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:52.347 04:02:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:52.347 04:02:34 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:17:52.347 04:02:34 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:52.347 04:02:34 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:52.347 { 00:17:52.347 "subsystems": [ 00:17:52.347 { 00:17:52.347 "subsystem": "bdev", 00:17:52.347 "config": [ 00:17:52.347 { 00:17:52.347 "params": { 00:17:52.347 "io_mechanism": "io_uring_cmd", 00:17:52.347 "conserve_cpu": true, 00:17:52.347 "filename": "/dev/ng0n1", 00:17:52.347 "name": "xnvme_bdev" 00:17:52.347 }, 00:17:52.347 "method": "bdev_xnvme_create" 00:17:52.347 }, 00:17:52.347 { 00:17:52.347 "method": "bdev_wait_for_examine" 00:17:52.347 } 00:17:52.347 ] 00:17:52.347 } 00:17:52.347 ] 00:17:52.347 } 00:17:52.347 [2024-12-07 04:02:35.056596] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:17:52.347 [2024-12-07 04:02:35.056721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73073 ] 00:17:52.606 [2024-12-07 04:02:35.237496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.864 [2024-12-07 04:02:35.344837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.122 Running I/O for 5 seconds... 
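The four bdevperf passes in this test (randread above, randwrite just finished, unmap running here, write_zeroes next) are a single loop: xnvme.sh@13 binds io_pattern_ref as a nameref onto the io_uring_cmd array and xnvme.sh@15 iterates it, re-launching bdevperf with only -w changed. A condensed sketch; the array contents are inferred from the four workloads seen in this log, and bdevperf/gen_conf stand for the paths used above:

io_uring_cmd=(randread randwrite unmap write_zeroes)  # inferred from the four passes in this log
declare -n io_pattern_ref=io_uring_cmd                # nameref, as in xnvme.sh@13
for io_pattern in "${io_pattern_ref[@]}"; do
  "$bdevperf" --json <(gen_conf) -q 64 -w "$io_pattern" -t 5 -T xnvme_bdev -o 4096
done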
00:17:54.997 71744.00 IOPS, 280.25 MiB/s [2024-12-07T04:02:39.111Z] 71520.00 IOPS, 279.38 MiB/s [2024-12-07T04:02:40.051Z] 71680.00 IOPS, 280.00 MiB/s [2024-12-07T04:02:40.998Z] 71744.00 IOPS, 280.25 MiB/s 00:17:58.262 Latency(us) 00:17:58.262 [2024-12-07T04:02:40.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.262 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:17:58.262 xnvme_bdev : 5.00 71791.21 280.43 0.00 0.00 888.80 611.93 2763.57 00:17:58.262 [2024-12-07T04:02:40.998Z] =================================================================================================================== 00:17:58.262 [2024-12-07T04:02:40.998Z] Total : 71791.21 280.43 0.00 0.00 888.80 611.93 2763.57 00:17:59.201 04:02:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:59.201 04:02:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:17:59.201 04:02:41 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:17:59.201 04:02:41 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:17:59.201 04:02:41 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:59.201 { 00:17:59.201 "subsystems": [ 00:17:59.201 { 00:17:59.201 "subsystem": "bdev", 00:17:59.201 "config": [ 00:17:59.201 { 00:17:59.201 "params": { 00:17:59.201 "io_mechanism": "io_uring_cmd", 00:17:59.201 "conserve_cpu": true, 00:17:59.201 "filename": "/dev/ng0n1", 00:17:59.201 "name": "xnvme_bdev" 00:17:59.201 }, 00:17:59.201 "method": "bdev_xnvme_create" 00:17:59.201 }, 00:17:59.201 { 00:17:59.201 "method": "bdev_wait_for_examine" 00:17:59.201 } 00:17:59.201 ] 00:17:59.201 } 00:17:59.201 ] 00:17:59.201 } 00:17:59.201 [2024-12-07 04:02:41.841715] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:17:59.201 [2024-12-07 04:02:41.841840] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73157 ] 00:17:59.461 [2024-12-07 04:02:42.021653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.461 [2024-12-07 04:02:42.127095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.029 Running I/O for 5 seconds... 
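The MiB/s column in these latency tables follows directly from IOPS at the fixed 4 KiB IO size (-o 4096); a quick sanity check against the unmap pass above:

# 71791 IOPS * 4096 B per IO, converted to MiB/s (integer approximation)
echo $(( 71791 * 4096 / 1024 / 1024 ))   # prints 280, matching the table's 280.43 MiB/s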
00:18:01.900 48200.00 IOPS, 188.28 MiB/s [2024-12-07T04:02:45.570Z] 52052.00 IOPS, 203.33 MiB/s [2024-12-07T04:02:46.507Z] 52150.67 IOPS, 203.71 MiB/s [2024-12-07T04:02:47.882Z] 53777.25 IOPS, 210.07 MiB/s [2024-12-07T04:02:47.882Z] 53134.00 IOPS, 207.55 MiB/s 00:18:05.146 Latency(us) 00:18:05.146 [2024-12-07T04:02:47.882Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.146 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:18:05.146 xnvme_bdev : 5.00 53107.95 207.45 0.00 0.00 1200.40 64.15 15370.69 00:18:05.146 [2024-12-07T04:02:47.882Z] =================================================================================================================== 00:18:05.146 [2024-12-07T04:02:47.882Z] Total : 53107.95 207.45 0.00 0.00 1200.40 64.15 15370.69 00:18:06.081 00:18:06.081 real 0m27.309s 00:18:06.081 user 0m17.019s 00:18:06.081 sys 0m8.432s 00:18:06.081 04:02:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:06.081 04:02:48 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:06.081 ************************************ 00:18:06.081 END TEST xnvme_bdevperf 00:18:06.081 ************************************ 00:18:06.081 04:02:48 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:18:06.081 04:02:48 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:06.081 04:02:48 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:06.081 04:02:48 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:06.081 ************************************ 00:18:06.081 START TEST xnvme_fio_plugin 00:18:06.081 ************************************ 00:18:06.081 04:02:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:18:06.081 04:02:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:18:06.081 04:02:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:18:06.081 04:02:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:06.081 04:02:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:06.081 04:02:48 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:06.081 04:02:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:06.081 04:02:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:06.081 04:02:48 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:06.081 04:02:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:06.081 04:02:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:06.081 04:02:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:06.082 04:02:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
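fio cannot simply dlopen an ASan-instrumented ioengine, since the sanitizer runtime must be the first DSO loaded; so the fio_plugin helper traced over the next lines first locates the ASan runtime the plugin links against (ldd | grep libasan) and then LD_PRELOADs both the sanitizer and the spdk_bdev plugin before invoking fio. Condensed from the xtrace that follows; rootdir stands for /home/vagrant/spdk_repo/spdk:

plugin=$rootdir/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # -> /usr/lib64/libasan.so.8 in this run
LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=<(gen_conf) \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev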
00:18:06.082 04:02:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:06.082 04:02:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:06.082 04:02:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:06.082 04:02:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:06.082 04:02:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:06.082 04:02:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:06.082 04:02:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:06.082 04:02:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:06.082 04:02:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:06.082 04:02:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:06.082 04:02:48 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:06.082 { 00:18:06.082 "subsystems": [ 00:18:06.082 { 00:18:06.082 "subsystem": "bdev", 00:18:06.082 "config": [ 00:18:06.082 { 00:18:06.082 "params": { 00:18:06.082 "io_mechanism": "io_uring_cmd", 00:18:06.082 "conserve_cpu": true, 00:18:06.082 "filename": "/dev/ng0n1", 00:18:06.082 "name": "xnvme_bdev" 00:18:06.082 }, 00:18:06.082 "method": "bdev_xnvme_create" 00:18:06.082 }, 00:18:06.082 { 00:18:06.082 "method": "bdev_wait_for_examine" 00:18:06.082 } 00:18:06.082 ] 00:18:06.082 } 00:18:06.082 ] 00:18:06.082 } 00:18:06.341 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:06.341 fio-3.35 00:18:06.341 Starting 1 thread 00:18:13.016 00:18:13.016 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73275: Sat Dec 7 04:02:54 2024 00:18:13.016 read: IOPS=24.8k, BW=96.8MiB/s (102MB/s)(484MiB/5002msec) 00:18:13.016 slat (nsec): min=2140, max=76033, avg=7674.43, stdev=3715.85 00:18:13.016 clat (usec): min=1080, max=5154, avg=2271.88, stdev=391.59 00:18:13.016 lat (usec): min=1083, max=5182, avg=2279.55, stdev=393.28 00:18:13.016 clat percentiles (usec): 00:18:13.016 | 1.00th=[ 1270], 5.00th=[ 1450], 10.00th=[ 1647], 20.00th=[ 1975], 00:18:13.016 | 30.00th=[ 2180], 40.00th=[ 2278], 50.00th=[ 2343], 60.00th=[ 2409], 00:18:13.016 | 70.00th=[ 2507], 80.00th=[ 2606], 90.00th=[ 2704], 95.00th=[ 2769], 00:18:13.016 | 99.00th=[ 2868], 99.50th=[ 2900], 99.90th=[ 3982], 99.95th=[ 4621], 00:18:13.016 | 99.99th=[ 5014] 00:18:13.016 bw ( KiB/s): min=90112, max=115200, per=100.00%, avg=100157.11, stdev=8596.80, samples=9 00:18:13.016 iops : min=22528, max=28800, avg=25039.22, stdev=2149.14, samples=9 00:18:13.016 lat (msec) : 2=20.70%, 4=79.20%, 10=0.10% 00:18:13.016 cpu : usr=48.03%, sys=48.31%, ctx=10, majf=0, minf=762 00:18:13.016 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:13.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:13.017 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:18:13.017 
issued rwts: total=123968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:13.017 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:13.017 00:18:13.017 Run status group 0 (all jobs): 00:18:13.017 READ: bw=96.8MiB/s (102MB/s), 96.8MiB/s-96.8MiB/s (102MB/s-102MB/s), io=484MiB (508MB), run=5002-5002msec 00:18:13.583 ----------------------------------------------------- 00:18:13.583 Suppressions used: 00:18:13.583 count bytes template 00:18:13.583 1 11 /usr/src/fio/parse.c 00:18:13.583 1 8 libtcmalloc_minimal.so 00:18:13.583 1 904 libcrypto.so 00:18:13.583 ----------------------------------------------------- 00:18:13.583 00:18:13.583 04:02:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:18:13.583 04:02:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:13.583 04:02:56 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:18:13.583 04:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:13.583 04:02:56 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:18:13.583 04:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:13.583 04:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:13.583 04:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:13.583 04:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:13.583 04:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:13.583 04:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:18:13.583 04:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:13.583 04:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:13.583 04:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:13.583 04:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:18:13.583 04:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:13.583 04:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:13.583 04:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:13.583 04:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:18:13.583 04:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:13.583 04:02:56 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite 
--time_based --runtime=5 --thread=1 --name xnvme_bdev 00:18:13.583 { 00:18:13.583 "subsystems": [ 00:18:13.583 { 00:18:13.583 "subsystem": "bdev", 00:18:13.583 "config": [ 00:18:13.583 { 00:18:13.583 "params": { 00:18:13.583 "io_mechanism": "io_uring_cmd", 00:18:13.583 "conserve_cpu": true, 00:18:13.583 "filename": "/dev/ng0n1", 00:18:13.583 "name": "xnvme_bdev" 00:18:13.583 }, 00:18:13.583 "method": "bdev_xnvme_create" 00:18:13.583 }, 00:18:13.583 { 00:18:13.583 "method": "bdev_wait_for_examine" 00:18:13.583 } 00:18:13.583 ] 00:18:13.583 } 00:18:13.583 ] 00:18:13.583 } 00:18:13.842 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:18:13.842 fio-3.35 00:18:13.842 Starting 1 thread 00:18:20.417 00:18:20.417 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73367: Sat Dec 7 04:03:02 2024 00:18:20.417 write: IOPS=22.1k, BW=86.4MiB/s (90.6MB/s)(432MiB/5002msec); 0 zone resets 00:18:20.417 slat (usec): min=4, max=130, avg= 8.71, stdev= 3.72 00:18:20.417 clat (usec): min=1566, max=5552, avg=2544.35, stdev=329.21 00:18:20.417 lat (usec): min=1571, max=5579, avg=2553.06, stdev=329.96 00:18:20.417 clat percentiles (usec): 00:18:20.417 | 1.00th=[ 1778], 5.00th=[ 1991], 10.00th=[ 2147], 20.00th=[ 2278], 00:18:20.417 | 30.00th=[ 2376], 40.00th=[ 2474], 50.00th=[ 2540], 60.00th=[ 2638], 00:18:20.417 | 70.00th=[ 2704], 80.00th=[ 2802], 90.00th=[ 2966], 95.00th=[ 3097], 00:18:20.417 | 99.00th=[ 3228], 99.50th=[ 3294], 99.90th=[ 3392], 99.95th=[ 5080], 00:18:20.417 | 99.99th=[ 5473] 00:18:20.417 bw ( KiB/s): min=77824, max=96256, per=100.00%, avg=89372.44, stdev=6744.57, samples=9 00:18:20.417 iops : min=19456, max=24064, avg=22343.11, stdev=1686.14, samples=9 00:18:20.417 lat (msec) : 2=5.03%, 4=94.91%, 10=0.06% 00:18:20.417 cpu : usr=43.75%, sys=52.53%, ctx=7, majf=0, minf=763 00:18:20.417 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:18:20.417 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:20.417 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:18:20.417 issued rwts: total=0,110656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:20.417 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:20.417 00:18:20.417 Run status group 0 (all jobs): 00:18:20.417 WRITE: bw=86.4MiB/s (90.6MB/s), 86.4MiB/s-86.4MiB/s (90.6MB/s-90.6MB/s), io=432MiB (453MB), run=5002-5002msec 00:18:20.985 ----------------------------------------------------- 00:18:20.985 Suppressions used: 00:18:20.985 count bytes template 00:18:20.985 1 11 /usr/src/fio/parse.c 00:18:20.985 1 8 libtcmalloc_minimal.so 00:18:20.985 1 904 libcrypto.so 00:18:20.985 ----------------------------------------------------- 00:18:20.985 00:18:20.985 00:18:20.985 real 0m14.740s 00:18:20.985 user 0m8.446s 00:18:20.985 sys 0m5.671s 00:18:20.985 04:03:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:20.985 04:03:03 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:18:20.985 ************************************ 00:18:20.985 END TEST xnvme_fio_plugin 00:18:20.985 ************************************ 00:18:20.985 04:03:03 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 72838 00:18:20.985 04:03:03 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72838 ']' 00:18:20.985 04:03:03 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 72838 00:18:20.985 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72838) - No 
such process 00:18:20.985 Process with pid 72838 is not found 00:18:20.985 04:03:03 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 72838 is not found' 00:18:20.985 ************************************ 00:18:20.985 END TEST nvme_xnvme 00:18:20.985 00:18:20.985 real 3m50.309s 00:18:20.985 user 2m4.133s 00:18:20.985 sys 1m28.149s 00:18:20.985 04:03:03 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:20.985 04:03:03 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:20.985 ************************************ 00:18:20.985 04:03:03 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:18:20.985 04:03:03 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:20.985 04:03:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:20.985 04:03:03 -- common/autotest_common.sh@10 -- # set +x 00:18:20.985 ************************************ 00:18:20.985 START TEST blockdev_xnvme 00:18:20.985 ************************************ 00:18:20.985 04:03:03 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:18:20.985 * Looking for test storage... 00:18:20.985 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:20.985 04:03:03 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:20.985 04:03:03 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:18:20.985 04:03:03 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:21.245 04:03:03 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:21.245 04:03:03 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:21.245 04:03:03 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:21.245 04:03:03 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:21.245 04:03:03 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:18:21.245 04:03:03 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:18:21.245 04:03:03 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:18:21.245 04:03:03 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:18:21.245 04:03:03 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:18:21.245 04:03:03 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:18:21.245 04:03:03 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:18:21.245 04:03:03 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:21.245 04:03:03 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:18:21.245 04:03:03 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:18:21.245 04:03:03 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:21.245 04:03:03 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:21.245 04:03:03 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:18:21.245 04:03:03 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:18:21.245 04:03:03 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:21.245 04:03:03 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:18:21.245 04:03:03 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:18:21.245 04:03:03 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:18:21.245 04:03:03 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:18:21.245 04:03:03 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:21.245 04:03:03 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:18:21.245 04:03:03 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:18:21.245 04:03:03 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:21.245 04:03:03 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:21.245 04:03:03 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:18:21.245 04:03:03 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:21.245 04:03:03 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:21.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.245 --rc genhtml_branch_coverage=1 00:18:21.245 --rc genhtml_function_coverage=1 00:18:21.245 --rc genhtml_legend=1 00:18:21.245 --rc geninfo_all_blocks=1 00:18:21.245 --rc geninfo_unexecuted_blocks=1 00:18:21.245 00:18:21.245 ' 00:18:21.245 04:03:03 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:21.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.245 --rc genhtml_branch_coverage=1 00:18:21.245 --rc genhtml_function_coverage=1 00:18:21.245 --rc genhtml_legend=1 00:18:21.245 --rc geninfo_all_blocks=1 00:18:21.245 --rc geninfo_unexecuted_blocks=1 00:18:21.245 00:18:21.245 ' 00:18:21.245 04:03:03 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:21.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.245 --rc genhtml_branch_coverage=1 00:18:21.245 --rc genhtml_function_coverage=1 00:18:21.245 --rc genhtml_legend=1 00:18:21.245 --rc geninfo_all_blocks=1 00:18:21.245 --rc geninfo_unexecuted_blocks=1 00:18:21.245 00:18:21.245 ' 00:18:21.245 04:03:03 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:21.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:21.245 --rc genhtml_branch_coverage=1 00:18:21.245 --rc genhtml_function_coverage=1 00:18:21.245 --rc genhtml_legend=1 00:18:21.245 --rc geninfo_all_blocks=1 00:18:21.245 --rc geninfo_unexecuted_blocks=1 00:18:21.245 00:18:21.245 ' 00:18:21.245 04:03:03 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:21.245 04:03:03 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:18:21.245 04:03:03 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:21.245 04:03:03 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:21.245 04:03:03 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:21.245 04:03:03 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:21.245 04:03:03 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:18:21.245 04:03:03 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:21.245 04:03:03 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:18:21.245 04:03:03 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:18:21.245 04:03:03 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:18:21.245 04:03:03 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:18:21.245 04:03:03 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:18:21.245 04:03:03 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:18:21.245 04:03:03 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:18:21.245 04:03:03 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:18:21.245 04:03:03 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:18:21.245 04:03:03 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:18:21.245 04:03:03 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:18:21.245 04:03:03 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:18:21.245 04:03:03 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:18:21.245 04:03:03 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:18:21.245 04:03:03 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:18:21.245 04:03:03 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:18:21.245 04:03:03 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73508 00:18:21.245 04:03:03 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:21.245 04:03:03 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:21.245 04:03:03 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73508 00:18:21.245 04:03:03 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73508 ']' 00:18:21.245 04:03:03 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.245 04:03:03 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:21.245 04:03:03 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.245 04:03:03 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:21.245 04:03:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:21.245 [2024-12-07 04:03:03.921230] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
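start_spdk_tgt launches the target in the background, and waitforlisten blocks until the new pid answers RPC on /var/tmp/spdk.sock, which is why the "Waiting for process to start up and listen..." message precedes the EAL banner. A simplified sketch of that handshake, not the exact autotest_common.sh implementation (the rpc.py path and retry budget are assumptions):

waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
  local i
  for ((i = 0; i < 100; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1                         # target died before listening
    scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
    sleep 0.1
  done
  return 1                                                         # timed out
}

build/bin/spdk_tgt '' '' &
spdk_tgt_pid=$!
waitforlisten "$spdk_tgt_pid"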
00:18:21.245 [2024-12-07 04:03:03.921551] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73508 ] 00:18:21.505 [2024-12-07 04:03:04.124093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.505 [2024-12-07 04:03:04.230777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.441 04:03:05 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:22.441 04:03:05 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:18:22.441 04:03:05 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:18:22.441 04:03:05 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:18:22.442 04:03:05 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:18:22.442 04:03:05 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:18:22.442 04:03:05 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:18:23.007 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:23.576 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:18:23.834 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:18:23.834 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:18:23.834 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:18:23.834 04:03:06 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1c1n1 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:18:23.834 nvme0n1 00:18:23.834 nvme0n2 00:18:23.834 nvme0n3 00:18:23.834 nvme1n1 00:18:23.834 nvme2n1 00:18:23.834 nvme3n1 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.834 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.834 04:03:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:24.092 04:03:06 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.092 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:24.092 04:03:06 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.092 04:03:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:24.092 
04:03:06 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.092 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:18:24.092 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:18:24.092 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:18:24.092 04:03:06 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.092 04:03:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:24.092 04:03:06 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.092 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:18:24.092 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:18:24.092 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "ec380dc7-5be2-4bfe-9afc-aa304d667220"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ec380dc7-5be2-4bfe-9afc-aa304d667220",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "efca8352-92af-4da1-a163-1d7265c5b617"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "efca8352-92af-4da1-a163-1d7265c5b617",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "d7028c2f-6846-40b9-85d5-f1a05bd2cd8c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d7028c2f-6846-40b9-85d5-f1a05bd2cd8c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "565e99be-bd37-494f-a6b5-e6fa2456d920"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "565e99be-bd37-494f-a6b5-e6fa2456d920",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "e01b59f3-a913-49b4-9ba7-18134776de46"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e01b59f3-a913-49b4-9ba7-18134776de46",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "d9dde86e-d51a-4c47-b57a-1b1c04051324"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d9dde86e-d51a-4c47-b57a-1b1c04051324",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:18:24.092 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:18:24.092 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:18:24.092 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:18:24.092 04:03:06 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 73508 00:18:24.092 04:03:06 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73508 ']' 00:18:24.092 04:03:06 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73508 00:18:24.092 04:03:06 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:18:24.093 04:03:06 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:24.093 04:03:06 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 73508 00:18:24.093 killing process with pid 73508 00:18:24.093 04:03:06 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:24.093 04:03:06 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:24.093 04:03:06 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73508' 00:18:24.093 04:03:06 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73508 00:18:24.093 04:03:06 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73508 00:18:26.624 04:03:09 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:26.624 04:03:09 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:18:26.624 04:03:09 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:26.624 04:03:09 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:26.624 04:03:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:26.624 ************************************ 00:18:26.624 START TEST bdev_hello_world 00:18:26.624 ************************************ 00:18:26.624 04:03:09 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:18:26.624 [2024-12-07 04:03:09.152061] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:18:26.624 [2024-12-07 04:03:09.152173] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73803 ] 00:18:26.624 [2024-12-07 04:03:09.333076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.884 [2024-12-07 04:03:09.441122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.144 [2024-12-07 04:03:09.859618] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:27.144 [2024-12-07 04:03:09.859839] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:18:27.144 [2024-12-07 04:03:09.859869] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:27.144 [2024-12-07 04:03:09.862103] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:27.144 [2024-12-07 04:03:09.862427] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:27.144 [2024-12-07 04:03:09.862449] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:27.144 [2024-12-07 04:03:09.862670] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
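The killprocess helper traced above never fires blindly: the pid must be non-empty (@954) and still alive per kill -0 (@958), and on Linux its comm name is read back (@960) so a sudo wrapper can be special-cased (@964) before the kill/wait pair (@973/@978). When kill -0 fails, as it did earlier for pid 72838, the helper reports "Process with pid ... is not found" and bails out instead. A condensed sketch with the sudo branch elided and the not-found return status assumed:

killprocess() {
  local pid=$1
  [[ -n $pid ]] || return 1
  if ! kill -0 "$pid" 2>/dev/null; then
    echo "Process with pid $pid is not found"
    return 1                                     # assumed status; not visible in this log
  fi
  local process_name=
  [[ $(uname) == Linux ]] && process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 for SPDK apps
  # the $process_name == sudo branch (signal the child instead) is elided here
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"                                    # reap and propagate exit status
}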
00:18:27.144 00:18:27.144 [2024-12-07 04:03:09.862695] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:28.524 00:18:28.524 real 0m1.892s 00:18:28.524 user 0m1.531s 00:18:28.524 sys 0m0.242s 00:18:28.524 ************************************ 00:18:28.524 END TEST bdev_hello_world 00:18:28.524 ************************************ 00:18:28.524 04:03:10 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:28.524 04:03:10 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:28.524 04:03:11 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:18:28.524 04:03:11 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:28.524 04:03:11 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:28.524 04:03:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:28.524 ************************************ 00:18:28.524 START TEST bdev_bounds 00:18:28.524 ************************************ 00:18:28.524 04:03:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:18:28.524 04:03:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73845 00:18:28.524 04:03:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:28.524 04:03:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:28.524 04:03:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73845' 00:18:28.524 Process bdevio pid: 73845 00:18:28.524 04:03:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73845 00:18:28.524 04:03:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73845 ']' 00:18:28.525 04:03:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.525 04:03:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.525 04:03:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.525 04:03:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.525 04:03:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:28.525 [2024-12-07 04:03:11.121512] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
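bdevio is started here with -w, so it idles after init until tests.py issues the perform_tests RPC; everything from "I/O targets:" onward is CUnit output produced inside the bdevio process, one suite per bdev. The driving sequence, condensed from the surrounding xtrace (paths as in this log, helpers as sketched earlier):

test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json '' &
bdevio_pid=$!
waitforlisten "$bdevio_pid"                 # same handshake as for spdk_tgt above
test/bdev/bdevio/tests.py perform_tests     # runs the six per-bdev suites that follow
killprocess "$bdevio_pid"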
00:18:28.525 [2024-12-07 04:03:11.121653] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73845 ] 00:18:28.784 [2024-12-07 04:03:11.304420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:28.784 [2024-12-07 04:03:11.414132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.784 [2024-12-07 04:03:11.414255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.784 [2024-12-07 04:03:11.414286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:29.353 04:03:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.353 04:03:11 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:18:29.353 04:03:11 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:29.353 I/O targets: 00:18:29.353 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:29.353 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:29.353 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:18:29.353 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:18:29.353 nvme2n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:18:29.353 nvme3n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:18:29.353 00:18:29.353 00:18:29.353 CUnit - A unit testing framework for C - Version 2.1-3 00:18:29.353 http://cunit.sourceforge.net/ 00:18:29.353 00:18:29.353 00:18:29.353 Suite: bdevio tests on: nvme3n1 00:18:29.353 Test: blockdev write read block ...passed 00:18:29.353 Test: blockdev write zeroes read block ...passed 00:18:29.353 Test: blockdev write zeroes read no split ...passed 00:18:29.353 Test: blockdev write zeroes read split ...passed 00:18:29.613 Test: blockdev write zeroes read split partial ...passed 00:18:29.613 Test: blockdev reset ...passed 00:18:29.613 Test: blockdev write read 8 blocks ...passed 00:18:29.613 Test: blockdev write read size > 128k ...passed 00:18:29.613 Test: blockdev write read invalid size ...passed 00:18:29.613 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:29.613 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:29.613 Test: blockdev write read max offset ...passed 00:18:29.613 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:29.613 Test: blockdev writev readv 8 blocks ...passed 00:18:29.613 Test: blockdev writev readv 30 x 1block ...passed 00:18:29.613 Test: blockdev writev readv block ...passed 00:18:29.613 Test: blockdev writev readv size > 128k ...passed 00:18:29.613 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:29.613 Test: blockdev comparev and writev ...passed 00:18:29.613 Test: blockdev nvme passthru rw ...passed 00:18:29.613 Test: blockdev nvme passthru vendor specific ...passed 00:18:29.613 Test: blockdev nvme admin passthru ...passed 00:18:29.613 Test: blockdev copy ...passed 00:18:29.613 Suite: bdevio tests on: nvme2n1 00:18:29.613 Test: blockdev write read block ...passed 00:18:29.613 Test: blockdev write zeroes read block ...passed 00:18:29.613 Test: blockdev write zeroes read no split ...passed 00:18:29.613 Test: blockdev write zeroes read split ...passed 00:18:29.613 Test: blockdev write zeroes read split partial ...passed 00:18:29.613 Test: blockdev reset ...passed 
00:18:29.613 Test: blockdev write read 8 blocks ...passed 00:18:29.613 Test: blockdev write read size > 128k ...passed 00:18:29.613 Test: blockdev write read invalid size ...passed 00:18:29.613 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:29.613 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:29.613 Test: blockdev write read max offset ...passed 00:18:29.613 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:29.613 Test: blockdev writev readv 8 blocks ...passed 00:18:29.613 Test: blockdev writev readv 30 x 1block ...passed 00:18:29.613 Test: blockdev writev readv block ...passed 00:18:29.613 Test: blockdev writev readv size > 128k ...passed 00:18:29.613 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:29.613 Test: blockdev comparev and writev ...passed 00:18:29.613 Test: blockdev nvme passthru rw ...passed 00:18:29.613 Test: blockdev nvme passthru vendor specific ...passed 00:18:29.613 Test: blockdev nvme admin passthru ...passed 00:18:29.613 Test: blockdev copy ...passed 00:18:29.613 Suite: bdevio tests on: nvme1n1 00:18:29.613 Test: blockdev write read block ...passed 00:18:29.613 Test: blockdev write zeroes read block ...passed 00:18:29.613 Test: blockdev write zeroes read no split ...passed 00:18:29.613 Test: blockdev write zeroes read split ...passed 00:18:29.613 Test: blockdev write zeroes read split partial ...passed 00:18:29.613 Test: blockdev reset ...passed 00:18:29.613 Test: blockdev write read 8 blocks ...passed 00:18:29.613 Test: blockdev write read size > 128k ...passed 00:18:29.613 Test: blockdev write read invalid size ...passed 00:18:29.613 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:29.613 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:29.613 Test: blockdev write read max offset ...passed 00:18:29.613 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:29.613 Test: blockdev writev readv 8 blocks ...passed 00:18:29.613 Test: blockdev writev readv 30 x 1block ...passed 00:18:29.613 Test: blockdev writev readv block ...passed 00:18:29.613 Test: blockdev writev readv size > 128k ...passed 00:18:29.613 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:29.613 Test: blockdev comparev and writev ...passed 00:18:29.613 Test: blockdev nvme passthru rw ...passed 00:18:29.613 Test: blockdev nvme passthru vendor specific ...passed 00:18:29.613 Test: blockdev nvme admin passthru ...passed 00:18:29.613 Test: blockdev copy ...passed 00:18:29.613 Suite: bdevio tests on: nvme0n3 00:18:29.613 Test: blockdev write read block ...passed 00:18:29.613 Test: blockdev write zeroes read block ...passed 00:18:29.613 Test: blockdev write zeroes read no split ...passed 00:18:29.613 Test: blockdev write zeroes read split ...passed 00:18:29.613 Test: blockdev write zeroes read split partial ...passed 00:18:29.613 Test: blockdev reset ...passed 00:18:29.613 Test: blockdev write read 8 blocks ...passed 00:18:29.613 Test: blockdev write read size > 128k ...passed 00:18:29.613 Test: blockdev write read invalid size ...passed 00:18:29.613 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:29.613 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:29.613 Test: blockdev write read max offset ...passed 00:18:29.613 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:29.613 Test: blockdev writev readv 8 blocks ...passed
00:18:29.613 Test: blockdev writev readv 30 x 1block ...passed 00:18:29.613 Test: blockdev writev readv block ...passed 00:18:29.613 Test: blockdev writev readv size > 128k ...passed 00:18:29.613 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:29.613 Test: blockdev comparev and writev ...passed 00:18:29.613 Test: blockdev nvme passthru rw ...passed 00:18:29.613 Test: blockdev nvme passthru vendor specific ...passed 00:18:29.613 Test: blockdev nvme admin passthru ...passed 00:18:29.613 Test: blockdev copy ...passed 00:18:29.613 Suite: bdevio tests on: nvme0n2 00:18:29.613 Test: blockdev write read block ...passed 00:18:29.613 Test: blockdev write zeroes read block ...passed 00:18:29.873 Test: blockdev write zeroes read no split ...passed 00:18:29.873 Test: blockdev write zeroes read split ...passed 00:18:29.873 Test: blockdev write zeroes read split partial ...passed 00:18:29.873 Test: blockdev reset ...passed 00:18:29.873 Test: blockdev write read 8 blocks ...passed 00:18:29.873 Test: blockdev write read size > 128k ...passed 00:18:29.873 Test: blockdev write read invalid size ...passed 00:18:29.873 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:29.873 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:29.873 Test: blockdev write read max offset ...passed 00:18:29.873 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:29.873 Test: blockdev writev readv 8 blocks ...passed 00:18:29.873 Test: blockdev writev readv 30 x 1block ...passed 00:18:29.873 Test: blockdev writev readv block ...passed 00:18:29.873 Test: blockdev writev readv size > 128k ...passed 00:18:29.873 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:29.873 Test: blockdev comparev and writev ...passed 00:18:29.873 Test: blockdev nvme passthru rw ...passed 00:18:29.873 Test: blockdev nvme passthru vendor specific ...passed 00:18:29.873 Test: blockdev nvme admin passthru ...passed 00:18:29.873 Test: blockdev copy ...passed 00:18:29.873 Suite: bdevio tests on: nvme0n1 00:18:29.873 Test: blockdev write read block ...passed 00:18:29.873 Test: blockdev write zeroes read block ...passed 00:18:29.873 Test: blockdev write zeroes read no split ...passed 00:18:29.873 Test: blockdev write zeroes read split ...passed 00:18:29.873 Test: blockdev write zeroes read split partial ...passed 00:18:29.873 Test: blockdev reset ...passed 00:18:29.873 Test: blockdev write read 8 blocks ...passed 00:18:29.873 Test: blockdev write read size > 128k ...passed 00:18:29.873 Test: blockdev write read invalid size ...passed 00:18:29.873 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:29.873 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:29.873 Test: blockdev write read max offset ...passed 00:18:29.873 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:29.873 Test: blockdev writev readv 8 blocks ...passed 00:18:29.873 Test: blockdev writev readv 30 x 1block ...passed 00:18:29.873 Test: blockdev writev readv block ...passed 00:18:29.873 Test: blockdev writev readv size > 128k ...passed 00:18:29.873 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:29.873 Test: blockdev comparev and writev ...passed 00:18:29.873 Test: blockdev nvme passthru rw ...passed 00:18:29.873 Test: blockdev nvme passthru vendor specific ...passed 00:18:29.873 Test: blockdev nvme admin passthru ...passed 00:18:29.873 Test: blockdev copy ...passed
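Each of the six bdevs runs the same 23-test CUnit list, so the run summary that follows should report 6 suites and 6 x 23 = 138 tests. A minimal sketch for re-checking those totals against a saved copy of this console output (the bdevio.log path is assumed, not part of the job; counts rely on one log entry per line, as the raw console stores it):

  # count suites and passed tests in a saved bdevio report (log path assumed)
  log=bdevio.log
  suites=$(grep -c 'Suite: bdevio tests on:' "$log")
  passed=$(grep -c 'Test: .*\.\.\.passed' "$log")
  echo "suites=$suites passed=$passed per_suite=$((passed / suites))"
  # expected against the summary below: suites=6 passed=138 per_suite=23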
00:18:29.873 00:18:29.873 Run Summary: Type Total Ran Passed Failed Inactive 00:18:29.873 suites 6 6 n/a 0 0 00:18:29.873 tests 138 138 138 0 0 00:18:29.873 asserts 780 780 780 0 n/a 00:18:29.873 00:18:29.873 Elapsed time = 1.315 seconds 00:18:29.873 0 00:18:29.873 04:03:12 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73845 00:18:29.873 04:03:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73845 ']' 00:18:29.873 04:03:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73845 00:18:29.873 04:03:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:18:29.873 04:03:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:29.873 04:03:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73845 00:18:29.873 killing process with pid 73845 00:18:29.873 04:03:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:29.873 04:03:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:29.873 04:03:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73845' 00:18:29.873 04:03:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73845 00:18:29.873 04:03:12 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73845 00:18:31.264 04:03:13 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:18:31.264 ************************************ 00:18:31.264 END TEST bdev_bounds 00:18:31.264 ************************************ 00:18:31.264 00:18:31.264 real 0m2.737s 00:18:31.264 user 0m6.731s 00:18:31.264 sys 0m0.404s 00:18:31.264 04:03:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:31.264 04:03:13 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:31.264 04:03:13 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:18:31.264 04:03:13 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:31.264 04:03:13 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:31.264 04:03:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:31.264 ************************************ 00:18:31.264 START TEST bdev_nbd 00:18:31.264 ************************************ 00:18:31.264 04:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:18:31.264 04:03:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:18:31.264 04:03:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:18:31.264 04:03:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:31.264 04:03:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:31.264 04:03:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:18:31.264 04:03:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:18:31.264 04:03:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
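The killprocess helper traced above boils down to a guarded kill-and-reap. A hedged reconstruction of what the @954-@978 xtrace lines execute (the real function lives in autotest_common.sh and handles the sudo case more carefully than this sketch):

  killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                            # @954: a pid argument is required
    kill -0 "$pid" || return 1                           # @958: the process must still exist
    local process_name
    if [ "$(uname)" = Linux ]; then                      # @959
      process_name=$(ps --no-headers -o comm= "$pid")    # @960: resolves to reactor_0 here
    fi
    [ "$process_name" = sudo ] && return 1               # @964: never signal a sudo wrapper (simplified)
    echo "killing process with pid $pid"                 # @972
    kill "$pid"                                          # @973: default SIGTERM
    wait "$pid"                                          # @978: reap so the exit code is observed
  }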
00:18:31.264 04:03:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:18:31.264 04:03:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:18:31.264 04:03:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:18:31.264 04:03:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:18:31.264 04:03:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:31.264 04:03:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:18:31.264 04:03:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:18:31.264 04:03:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:18:31.264 04:03:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73905 00:18:31.264 04:03:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:18:31.264 04:03:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:31.264 04:03:13 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73905 /var/tmp/spdk-nbd.sock 00:18:31.264 04:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73905 ']' 00:18:31.264 04:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:31.264 04:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:31.264 04:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:31.264 04:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.264 04:03:13 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:31.264 [2024-12-07 04:03:13.945341] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
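Before any device work starts, nbd_function_test pins down its fixtures: the six bdev names, the first six of sixteen /dev/nbd nodes (note the lexical ordering, which is why nbd10-nbd13 come before nbd2), a cleanup trap keyed to the bdev_svc pid, and a dedicated RPC socket. A hedged sketch of that prologue as the trace above reads (cleanup and waitforlisten are the suite's own helpers; SPDK_DIR stands in for the repo path):

  rpc_server=/var/tmp/spdk-nbd.sock
  conf=$SPDK_DIR/test/bdev/bdev.json
  bdev_all=(nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1)
  bdev_num=${#bdev_all[@]}                               # 6

  [[ -e /sys/module/nbd ]]                               # @308: nbd module must already be loaded

  nbd_all=(/dev/nbd{0,1,10,11,12,13,14,15,2,3,4,5,6,7,8,9})
  nbd_list=("${nbd_all[@]::bdev_num}")                   # /dev/nbd0 nbd1 nbd10 nbd11 nbd12 nbd13
  bdev_list=("${bdev_all[@]::bdev_num}")

  $SPDK_DIR/test/app/bdev_svc/bdev_svc -r "$rpc_server" -i 0 --json "$conf" &
  nbd_pid=$!                                             # 73905 in this run
  trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
  waitforlisten "$nbd_pid" "$rpc_server"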
00:18:31.264 [2024-12-07 04:03:13.945460] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.523 [2024-12-07 04:03:14.130260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.782 [2024-12-07 04:03:14.264392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.351 04:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.351 04:03:14 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:18:32.351 04:03:14 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:18:32.351 04:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:32.351 04:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:18:32.351 04:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:18:32.351 04:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:18:32.351 04:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:32.351 04:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:18:32.351 04:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:18:32.351 04:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:18:32.351 04:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:18:32.351 04:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:18:32.351 04:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:32.351 04:03:14 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:18:32.351 04:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:18:32.351 04:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:18:32.351 04:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:18:32.351 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:32.351 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:32.351 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:32.351 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:32.351 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:32.351 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:32.351 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:32.351 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:32.351 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:32.351 
1+0 records in 00:18:32.352 1+0 records out 00:18:32.352 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000748483 s, 5.5 MB/s 00:18:32.352 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.352 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:32.352 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.352 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:32.352 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:32.352 04:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:32.352 04:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:32.352 04:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:18:32.610 04:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:18:32.610 04:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:18:32.610 04:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:18:32.611 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:32.611 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:32.611 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:32.611 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:32.611 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:32.611 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:32.611 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:32.611 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:32.611 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:32.611 1+0 records in 00:18:32.611 1+0 records out 00:18:32.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000788136 s, 5.2 MB/s 00:18:32.611 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.611 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:32.611 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.611 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:32.611 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:32.611 04:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:32.611 04:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:32.611 04:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:18:32.869 04:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:18:32.869 04:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:18:32.869 04:03:15 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:18:32.869 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:18:32.869 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:32.869 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:32.869 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:32.869 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:18:32.869 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:32.869 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:32.869 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:32.869 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:32.869 1+0 records in 00:18:32.869 1+0 records out 00:18:32.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000669531 s, 6.1 MB/s 00:18:32.869 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.869 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:32.869 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:32.869 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:32.869 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:32.869 04:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:32.869 04:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:32.869 04:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:18:33.128 04:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:18:33.128 04:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:18:33.128 04:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:18:33.128 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:18:33.128 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:33.128 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:33.128 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:33.128 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:18:33.128 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:33.128 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:33.128 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:33.128 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:33.128 1+0 records in 00:18:33.128 1+0 records out 00:18:33.128 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105707 s, 3.9 MB/s 00:18:33.128 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:33.128 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:33.128 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:33.128 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:33.128 04:03:15 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:33.128 04:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:33.128 04:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:33.128 04:03:15 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:18:33.387 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:18:33.387 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:18:33.387 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:18:33.387 04:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:18:33.387 04:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:33.387 04:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:33.387 04:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:33.387 04:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:18:33.387 04:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:33.387 04:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:33.387 04:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:33.387 04:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:33.387 1+0 records in 00:18:33.387 1+0 records out 00:18:33.387 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000651749 s, 6.3 MB/s 00:18:33.387 04:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:33.387 04:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:33.387 04:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:33.387 04:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:33.387 04:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:33.387 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:33.387 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:33.387 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:18:33.645 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:18:33.645 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:18:33.645 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:18:33.645 04:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:18:33.645 04:03:16 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:33.645 04:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:33.645 04:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:33.645 04:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:18:33.645 04:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:33.645 04:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:33.645 04:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:33.645 04:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:33.645 1+0 records in 00:18:33.645 1+0 records out 00:18:33.645 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106345 s, 3.9 MB/s 00:18:33.645 04:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:33.645 04:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:33.645 04:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:33.645 04:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:33.645 04:03:16 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:33.645 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:33.645 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:18:33.645 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:33.903 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:18:33.903 { 00:18:33.903 "nbd_device": "/dev/nbd0", 00:18:33.903 "bdev_name": "nvme0n1" 00:18:33.903 }, 00:18:33.903 { 00:18:33.903 "nbd_device": "/dev/nbd1", 00:18:33.903 "bdev_name": "nvme0n2" 00:18:33.903 }, 00:18:33.903 { 00:18:33.903 "nbd_device": "/dev/nbd2", 00:18:33.903 "bdev_name": "nvme0n3" 00:18:33.903 }, 00:18:33.903 { 00:18:33.903 "nbd_device": "/dev/nbd3", 00:18:33.903 "bdev_name": "nvme1n1" 00:18:33.903 }, 00:18:33.903 { 00:18:33.903 "nbd_device": "/dev/nbd4", 00:18:33.903 "bdev_name": "nvme2n1" 00:18:33.903 }, 00:18:33.903 { 00:18:33.903 "nbd_device": "/dev/nbd5", 00:18:33.903 "bdev_name": "nvme3n1" 00:18:33.903 } 00:18:33.903 ]' 00:18:33.903 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:18:33.903 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:18:33.903 { 00:18:33.903 "nbd_device": "/dev/nbd0", 00:18:33.903 "bdev_name": "nvme0n1" 00:18:33.903 }, 00:18:33.903 { 00:18:33.903 "nbd_device": "/dev/nbd1", 00:18:33.903 "bdev_name": "nvme0n2" 00:18:33.903 }, 00:18:33.903 { 00:18:33.903 "nbd_device": "/dev/nbd2", 00:18:33.903 "bdev_name": "nvme0n3" 00:18:33.903 }, 00:18:33.903 { 00:18:33.903 "nbd_device": "/dev/nbd3", 00:18:33.903 "bdev_name": "nvme1n1" 00:18:33.903 }, 00:18:33.903 { 00:18:33.903 "nbd_device": "/dev/nbd4", 00:18:33.903 "bdev_name": "nvme2n1" 00:18:33.903 }, 00:18:33.903 { 00:18:33.903 "nbd_device": "/dev/nbd5", 00:18:33.903 "bdev_name": "nvme3n1" 00:18:33.903 } 00:18:33.903 ]' 00:18:33.903 04:03:16 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:18:33.904 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:18:33.904 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:33.904 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:18:33.904 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:33.904 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:33.904 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:33.904 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:34.162 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:34.162 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:34.162 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:34.162 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:34.162 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:34.162 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:34.162 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:34.162 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:34.162 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:34.162 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:34.420 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:34.420 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:34.420 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:34.420 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:34.420 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:34.420 04:03:16 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:34.420 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:34.420 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:34.420 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:34.420 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:18:34.679 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:18:34.679 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:18:34.679 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:18:34.679 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:34.679 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:34.679 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:18:34.679 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:34.679 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:34.679 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:34.679 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:18:34.679 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:18:34.938 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:18:34.938 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:18:34.938 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:34.938 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:34.938 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:18:34.938 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:34.938 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:34.939 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:34.939 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:18:34.939 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:18:34.939 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:18:34.939 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:18:34.939 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:34.939 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:34.939 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:18:34.939 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:34.939 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:34.939 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:34.939 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:18:35.198 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:18:35.198 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:18:35.198 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:18:35.198 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:35.198 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:35.198 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:18:35.198 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:35.198 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:35.198 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:35.198 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:35.198 04:03:17 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:35.458 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:35.458 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:35.458 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:35.458 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:35.458 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:35.458 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:35.458 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:35.458 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:35.458 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:35.458 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:18:35.458 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:18:35.458 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:18:35.458 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:18:35.458 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:35.458 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:18:35.458 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:35.458 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:35.458 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:35.458 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:18:35.458 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:35.458 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:18:35.458 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:35.458 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:35.458 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:35.458 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:18:35.459 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:35.459 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:35.459 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:18:35.719 /dev/nbd0 00:18:35.719 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:35.719 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:35.719 04:03:18 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:35.719 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:35.719 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:35.719 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:35.719 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:35.719 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:35.719 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:35.719 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:35.719 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:35.719 1+0 records in 00:18:35.719 1+0 records out 00:18:35.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000759493 s, 5.4 MB/s 00:18:35.719 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:35.719 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:35.719 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:35.719 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:35.719 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:35.719 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:35.719 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:35.719 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:18:35.979 /dev/nbd1 00:18:35.979 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:35.979 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:35.979 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:35.979 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:35.979 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:35.979 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:35.979 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:35.979 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:35.979 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:35.979 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:35.979 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:35.979 1+0 records in 00:18:35.979 1+0 records out 00:18:35.979 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000748232 s, 5.5 MB/s 00:18:35.979 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:35.979 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:35.979 04:03:18 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:35.979 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:35.979 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:35.979 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:35.979 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:35.979 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:18:36.240 /dev/nbd10 00:18:36.240 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:18:36.240 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:18:36.240 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:18:36.240 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:36.240 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:36.240 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:36.240 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:18:36.240 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:36.240 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:36.240 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:36.240 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:36.240 1+0 records in 00:18:36.240 1+0 records out 00:18:36.240 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00272025 s, 1.5 MB/s 00:18:36.240 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:36.240 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:36.240 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:36.240 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:36.240 04:03:18 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:36.240 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:36.240 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:36.240 04:03:18 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:18:36.500 /dev/nbd11 00:18:36.500 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:18:36.500 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:18:36.500 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:18:36.500 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:36.500 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:36.500 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:36.500 04:03:19 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:18:36.500 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:36.500 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:36.500 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:36.500 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:36.500 1+0 records in 00:18:36.500 1+0 records out 00:18:36.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011408 s, 3.6 MB/s 00:18:36.500 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:36.500 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:36.500 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:36.500 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:36.500 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:36.500 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:36.500 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:36.500 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:18:36.759 /dev/nbd12 00:18:36.759 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:18:36.760 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:18:36.760 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:18:36.760 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:36.760 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:36.760 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:36.760 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:18:36.760 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:36.760 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:36.760 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:36.760 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:36.760 1+0 records in 00:18:36.760 1+0 records out 00:18:36.760 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000964486 s, 4.2 MB/s 00:18:36.760 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:36.760 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:36.760 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:36.760 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:36.760 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:36.760 04:03:19 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:36.760 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:36.760 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:18:37.019 /dev/nbd13 00:18:37.019 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:18:37.019 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:18:37.019 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:18:37.019 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:37.019 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:37.019 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:37.019 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:18:37.019 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:37.019 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:37.019 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:37.019 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:37.019 1+0 records in 00:18:37.019 1+0 records out 00:18:37.019 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111201 s, 3.7 MB/s 00:18:37.019 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:37.019 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:37.019 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:37.019 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:37.019 04:03:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:37.019 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:37.019 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:18:37.019 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:37.019 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:37.019 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:37.278 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:37.278 { 00:18:37.278 "nbd_device": "/dev/nbd0", 00:18:37.278 "bdev_name": "nvme0n1" 00:18:37.278 }, 00:18:37.278 { 00:18:37.278 "nbd_device": "/dev/nbd1", 00:18:37.278 "bdev_name": "nvme0n2" 00:18:37.278 }, 00:18:37.278 { 00:18:37.278 "nbd_device": "/dev/nbd10", 00:18:37.278 "bdev_name": "nvme0n3" 00:18:37.278 }, 00:18:37.278 { 00:18:37.278 "nbd_device": "/dev/nbd11", 00:18:37.278 "bdev_name": "nvme1n1" 00:18:37.278 }, 00:18:37.278 { 00:18:37.278 "nbd_device": "/dev/nbd12", 00:18:37.278 "bdev_name": "nvme2n1" 00:18:37.278 }, 00:18:37.278 { 00:18:37.278 "nbd_device": "/dev/nbd13", 00:18:37.278 "bdev_name": "nvme3n1" 00:18:37.278 } 00:18:37.278 ]' 00:18:37.278 04:03:19 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # echo '[ 00:18:37.278 { 00:18:37.278 "nbd_device": "/dev/nbd0", 00:18:37.278 "bdev_name": "nvme0n1" 00:18:37.278 }, 00:18:37.278 { 00:18:37.278 "nbd_device": "/dev/nbd1", 00:18:37.278 "bdev_name": "nvme0n2" 00:18:37.278 }, 00:18:37.278 { 00:18:37.278 "nbd_device": "/dev/nbd10", 00:18:37.278 "bdev_name": "nvme0n3" 00:18:37.278 }, 00:18:37.278 { 00:18:37.278 "nbd_device": "/dev/nbd11", 00:18:37.278 "bdev_name": "nvme1n1" 00:18:37.278 }, 00:18:37.278 { 00:18:37.278 "nbd_device": "/dev/nbd12", 00:18:37.278 "bdev_name": "nvme2n1" 00:18:37.278 }, 00:18:37.278 { 00:18:37.278 "nbd_device": "/dev/nbd13", 00:18:37.278 "bdev_name": "nvme3n1" 00:18:37.278 } 00:18:37.278 ]' 00:18:37.279 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:37.279 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:18:37.279 /dev/nbd1 00:18:37.279 /dev/nbd10 00:18:37.279 /dev/nbd11 00:18:37.279 /dev/nbd12 00:18:37.279 /dev/nbd13' 00:18:37.279 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:37.279 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:18:37.279 /dev/nbd1 00:18:37.279 /dev/nbd10 00:18:37.279 /dev/nbd11 00:18:37.279 /dev/nbd12 00:18:37.279 /dev/nbd13' 00:18:37.279 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:18:37.279 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:18:37.279 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:18:37.279 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:18:37.279 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:18:37.279 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:37.279 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:37.279 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:37.279 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:37.279 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:37.279 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:18:37.279 256+0 records in 00:18:37.279 256+0 records out 00:18:37.279 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122731 s, 85.4 MB/s 00:18:37.279 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:37.279 04:03:19 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:37.537 256+0 records in 00:18:37.537 256+0 records out 00:18:37.537 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.120034 s, 8.7 MB/s 00:18:37.537 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:37.537 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:18:37.537 256+0 records in 00:18:37.537 256+0 records out 00:18:37.537 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125346 s, 
8.4 MB/s 00:18:37.537 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:37.537 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:18:37.797 256+0 records in 00:18:37.797 256+0 records out 00:18:37.797 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129264 s, 8.1 MB/s 00:18:37.797 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:37.797 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:18:37.797 256+0 records in 00:18:37.797 256+0 records out 00:18:37.797 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.140349 s, 7.5 MB/s 00:18:37.797 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:37.797 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:18:38.056 256+0 records in 00:18:38.056 256+0 records out 00:18:38.056 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126153 s, 8.3 MB/s 00:18:38.056 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:38.056 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:18:38.056 256+0 records in 00:18:38.056 256+0 records out 00:18:38.056 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156066 s, 6.7 MB/s 00:18:38.056 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:18:38.056 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:38.056 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:38.056 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:38.056 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:38.056 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:38.056 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:38.056 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:38.056 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:18:38.056 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:38.056 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:18:38.056 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:38.056 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:18:38.316 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:38.316 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:18:38.316 
04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:38.317 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:18:38.317 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:38.317 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:18:38.317 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:38.317 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:18:38.317 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:38.317 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:18:38.317 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:38.317 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:38.317 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:38.317 04:03:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:38.576 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:38.576 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:38.576 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:38.576 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:38.576 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:38.576 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:38.576 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:38.576 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:38.576 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:38.576 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:18:38.576 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:38.576 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:38.576 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:38.576 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:38.576 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:38.576 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:38.576 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:38.576 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:38.576 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:38.576 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:18:38.835 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:18:38.835 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:18:38.835 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:18:38.835 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:38.835 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:38.835 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:18:38.835 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:38.835 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:38.835 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:38.835 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:18:39.095 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:18:39.095 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:18:39.095 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:18:39.095 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:39.095 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:39.095 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:18:39.095 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:39.095 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:39.095 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:39.095 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:18:39.355 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:18:39.355 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:18:39.355 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:18:39.355 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:39.355 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:39.355 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:18:39.355 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:39.355 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:39.355 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:39.355 04:03:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:18:39.614 04:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:18:39.614 04:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:18:39.614 04:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:18:39.614 04:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:39.614 04:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:39.614 
04:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:18:39.614 04:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:39.614 04:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:39.614 04:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:39.615 04:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:39.615 04:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:39.873 04:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:39.873 04:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:39.873 04:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:39.873 04:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:39.873 04:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:39.873 04:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:39.873 04:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:39.873 04:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:39.873 04:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:39.873 04:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:18:39.873 04:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:39.873 04:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:18:39.873 04:03:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:39.873 04:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:39.873 04:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:18:39.873 04:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:18:40.132 malloc_lvol_verify 00:18:40.132 04:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:18:40.392 2c6e32ac-2576-4ca2-ae81-519c3200d735 00:18:40.392 04:03:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:18:40.392 723b034a-b0c1-45e4-98eb-9cc0d0a0a1f5 00:18:40.392 04:03:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:18:40.651 /dev/nbd0 00:18:40.651 04:03:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:18:40.651 04:03:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:18:40.651 04:03:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:18:40.651 04:03:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:18:40.651 04:03:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:18:40.651 mke2fs 1.47.0 (5-Feb-2023) 00:18:40.651 Discarding device blocks: 0/4096 
done 00:18:40.651 Creating filesystem with 4096 1k blocks and 1024 inodes 00:18:40.651 00:18:40.651 Allocating group tables: 0/1 done 00:18:40.651 Writing inode tables: 0/1 done 00:18:40.651 Creating journal (1024 blocks): done 00:18:40.651 Writing superblocks and filesystem accounting information: 0/1 done 00:18:40.651 00:18:40.651 04:03:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:40.651 04:03:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:40.651 04:03:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:40.651 04:03:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:40.651 04:03:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:40.651 04:03:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:40.651 04:03:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:40.910 04:03:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:40.910 04:03:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:40.910 04:03:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:40.910 04:03:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:40.910 04:03:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:40.910 04:03:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:40.910 04:03:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:40.910 04:03:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:40.910 04:03:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73905 00:18:40.910 04:03:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73905 ']' 00:18:40.910 04:03:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73905 00:18:40.910 04:03:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:18:40.910 04:03:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:40.910 04:03:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73905 00:18:40.910 killing process with pid 73905 00:18:40.910 04:03:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:40.910 04:03:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:40.911 04:03:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73905' 00:18:40.911 04:03:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73905 00:18:40.911 04:03:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73905 00:18:42.290 04:03:24 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:18:42.290 00:18:42.290 real 0m10.885s 00:18:42.290 user 0m13.721s 00:18:42.290 sys 0m4.782s 00:18:42.290 04:03:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:42.290 ************************************ 00:18:42.290 END TEST bdev_nbd 00:18:42.290 ************************************ 00:18:42.290 04:03:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
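For reference, the bdev_nbd write/verify pass traced above boils down to the following shell loops. This is reconstructed from the xtrace output rather than copied verbatim from nbd_common.sh, and the 0.1 s poll interval in the teardown is an assumption (the traced run exits on its first probe):

  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
  tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
  # write phase: copy 1 MiB (256 x 4 KiB, O_DIRECT) of random data to each NBD device
  for dev in "${nbd_list[@]}"; do
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done
  # verify phase: every device must read back byte-identical to the source file
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp_file" "$dev"
  done
  rm "$tmp_file"
  # teardown: stop each disk over RPC, then poll /proc/partitions until it disappears
  for dev in "${nbd_list[@]}"; do
      scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$dev"
      name=$(basename "$dev")
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$name" /proc/partitions || break
          sleep 0.1   # assumed interval; not visible in the trace
      done
  done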
00:18:42.290 04:03:24 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:18:42.290 04:03:24 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']' 00:18:42.290 04:03:24 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']' 00:18:42.290 04:03:24 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:18:42.290 04:03:24 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:42.290 04:03:24 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:42.290 04:03:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:42.290 ************************************ 00:18:42.290 START TEST bdev_fio 00:18:42.290 ************************************ 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:18:42.290 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:42.290 04:03:24 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:42.290 ************************************ 00:18:42.290 START TEST bdev_fio_rw_verify 00:18:42.291 ************************************ 00:18:42.291 04:03:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:42.291 04:03:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:42.291 04:03:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:42.291 04:03:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:42.291 04:03:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:42.291 04:03:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:42.291 04:03:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:18:42.291 04:03:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:42.291 04:03:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:42.291 04:03:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:18:42.291 04:03:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:42.291 04:03:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:42.291 04:03:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:42.291 04:03:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:42.291 04:03:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:18:42.291 04:03:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:42.291 04:03:24 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:42.550 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:42.550 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:42.550 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:42.550 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:42.550 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:42.550 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:42.550 fio-3.35 00:18:42.550 Starting 6 threads 00:18:54.908 00:18:54.908 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74313: Sat Dec 7 04:03:36 2024 00:18:54.908 read: IOPS=34.3k, BW=134MiB/s (141MB/s)(1340MiB/10001msec) 00:18:54.908 slat (usec): min=2, max=1487, avg= 7.81, stdev= 7.10 00:18:54.908 clat (usec): min=98, max=10248, avg=521.67, stdev=234.05 00:18:54.908 lat (usec): min=103, max=10309, avg=529.49, stdev=235.38 
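The bdev.fio consumed here is assembled on the fly: fio_config_gen seeds a verify template (its contents are not shown in this log), the trace appends serialize_overlap=1 for fio 3.x, and the loop above adds one job section per bdev. The resulting file and invocation look roughly like this — the [global] verify knobs are assumptions, the job sections, serialize_overlap line, LD_PRELOAD value, and command-line parameters are taken from the trace:

  # bdev.fio (approximate shape)
  [global]
  thread=1                 # assumed, from the verify template
  verify=md5               # assumed, from the verify template
  serialize_overlap=1      # echoed by the trace for fio >= 3.x
  [job_nvme0n1]
  filename=nvme0n1
  [job_nvme0n2]
  filename=nvme0n2
  [job_nvme0n3]
  filename=nvme0n3
  [job_nvme1n1]
  filename=nvme1n1
  [job_nvme2n1]
  filename=nvme2n1
  [job_nvme3n1]
  filename=nvme3n1

  # run fio through the SPDK bdev plugin; the ASan runtime must be preloaded
  # ahead of the plugin, which is what the ldd/grep/awk dance above resolves
  LD_PRELOAD='/usr/lib64/libasan.so.8 build/fio/spdk_bdev' \
      fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
          test/bdev/bdev.fio --verify_state_save=0 \
          --spdk_json_conf=test/bdev/bdev.json --spdk_mem=0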
00:18:54.908 clat percentiles (usec): 00:18:54.908 | 50.000th=[ 510], 99.000th=[ 1188], 99.900th=[ 2057], 99.990th=[ 4621], 00:18:54.908 | 99.999th=[10159] 00:18:54.908 write: IOPS=34.5k, BW=135MiB/s (141MB/s)(1349MiB/10001msec); 0 zone resets 00:18:54.908 slat (usec): min=10, max=2317, avg=25.75, stdev=32.28 00:18:54.908 clat (usec): min=82, max=8751, avg=622.74, stdev=270.92 00:18:54.908 lat (usec): min=101, max=8768, avg=648.49, stdev=276.31 00:18:54.908 clat percentiles (usec): 00:18:54.908 | 50.000th=[ 594], 99.000th=[ 1467], 99.900th=[ 2474], 99.990th=[ 4686], 00:18:54.908 | 99.999th=[ 8586] 00:18:54.908 bw ( KiB/s): min=111889, max=166456, per=100.00%, avg=139086.42, stdev=2338.04, samples=114 00:18:54.908 iops : min=27972, max=41614, avg=34771.37, stdev=584.50, samples=114 00:18:54.908 lat (usec) : 100=0.01%, 250=6.97%, 500=33.70%, 750=40.88%, 1000=13.76% 00:18:54.908 lat (msec) : 2=4.51%, 4=0.16%, 10=0.02%, 20=0.01% 00:18:54.908 cpu : usr=57.02%, sys=27.88%, ctx=8337, majf=0, minf=28264 00:18:54.908 IO depths : 1=11.7%, 2=24.1%, 4=50.9%, 8=13.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:54.908 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.908 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.908 issued rwts: total=343072,345256,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.908 latency : target=0, window=0, percentile=100.00%, depth=8 00:18:54.908 00:18:54.908 Run status group 0 (all jobs): 00:18:54.908 READ: bw=134MiB/s (141MB/s), 134MiB/s-134MiB/s (141MB/s-141MB/s), io=1340MiB (1405MB), run=10001-10001msec 00:18:54.908 WRITE: bw=135MiB/s (141MB/s), 135MiB/s-135MiB/s (141MB/s-141MB/s), io=1349MiB (1414MB), run=10001-10001msec 00:18:54.908 ----------------------------------------------------- 00:18:54.908 Suppressions used: 00:18:54.908 count bytes template 00:18:54.908 6 48 /usr/src/fio/parse.c 00:18:54.908 1946 186816 /usr/src/fio/iolog.c 00:18:54.908 1 8 libtcmalloc_minimal.so 00:18:54.908 1 904 libcrypto.so 00:18:54.908 ----------------------------------------------------- 00:18:54.908 00:18:54.908 00:18:54.908 real 0m12.547s 00:18:54.908 user 0m36.252s 00:18:54.908 sys 0m17.148s 00:18:54.908 04:03:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:54.908 ************************************ 00:18:54.908 END TEST bdev_fio_rw_verify 00:18:54.908 ************************************ 00:18:54.908 04:03:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:18:54.908 04:03:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:18:54.908 04:03:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:54.908 04:03:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:18:54.908 04:03:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:54.908 04:03:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:18:54.908 04:03:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:18:54.908 04:03:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:54.908 04:03:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:18:54.908 04:03:37 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:54.908 04:03:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:18:54.908 04:03:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:54.908 04:03:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:54.908 04:03:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:54.908 04:03:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:18:54.908 04:03:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:18:54.908 04:03:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:18:54.908 04:03:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:18:54.909 04:03:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "ec380dc7-5be2-4bfe-9afc-aa304d667220"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ec380dc7-5be2-4bfe-9afc-aa304d667220",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "efca8352-92af-4da1-a163-1d7265c5b617"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "efca8352-92af-4da1-a163-1d7265c5b617",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "d7028c2f-6846-40b9-85d5-f1a05bd2cd8c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d7028c2f-6846-40b9-85d5-f1a05bd2cd8c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' 
"zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "565e99be-bd37-494f-a6b5-e6fa2456d920"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "565e99be-bd37-494f-a6b5-e6fa2456d920",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "e01b59f3-a913-49b4-9ba7-18134776de46"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "e01b59f3-a913-49b4-9ba7-18134776de46",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "d9dde86e-d51a-4c47-b57a-1b1c04051324"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "d9dde86e-d51a-4c47-b57a-1b1c04051324",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:18:54.909 04:03:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:18:54.909 04:03:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:54.909 /home/vagrant/spdk_repo/spdk 00:18:54.909 04:03:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:18:54.909 04:03:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:18:54.909 04:03:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:18:54.909 00:18:54.909 real 0m12.785s 00:18:54.909 user 
0m36.366s 00:18:54.909 sys 0m17.276s 00:18:54.909 04:03:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:54.909 04:03:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:54.909 ************************************ 00:18:54.909 END TEST bdev_fio 00:18:54.909 ************************************ 00:18:55.169 04:03:37 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:55.169 04:03:37 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:55.169 04:03:37 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:18:55.169 04:03:37 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:55.169 04:03:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:55.169 ************************************ 00:18:55.169 START TEST bdev_verify 00:18:55.169 ************************************ 00:18:55.169 04:03:37 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:55.169 [2024-12-07 04:03:37.759786] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:18:55.169 [2024-12-07 04:03:37.759904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74492 ] 00:18:55.428 [2024-12-07 04:03:37.938553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:55.428 [2024-12-07 04:03:38.050208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.428 [2024-12-07 04:03:38.050240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.998 Running I/O for 5 seconds... 
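The bdevperf flags used for this verify pass decode as follows. The descriptions are from general bdevperf usage rather than from this log, so treat them as a reading aid and check them against your local --help output:

  build/examples/bdevperf --json test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3
  # --json     bdev-layer configuration (the six xnvme bdevs created earlier)
  # -q 128     queue depth per job
  # -o 4096    I/O size in bytes (4 KiB)
  # -w verify  write a pattern, read it back, and compare
  # -t 5       run time in seconds
  # -m 0x3     core mask: cores 0 and 1 (the two reactors started above)
  # -C         let every core submit I/O to every bdev -- consistent with each
  #            device reporting both a Core Mask 0x1 and a Core Mask 0x2 job below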
00:18:58.317 23968.00 IOPS, 93.62 MiB/s [2024-12-07T04:03:41.991Z] 22737.50 IOPS, 88.82 MiB/s [2024-12-07T04:03:42.929Z] 21119.33 IOPS, 82.50 MiB/s [2024-12-07T04:03:43.866Z] 20247.50 IOPS, 79.09 MiB/s [2024-12-07T04:03:43.866Z] 19878.00 IOPS, 77.65 MiB/s 00:19:01.130 Latency(us) 00:19:01.130 [2024-12-07T04:03:43.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:01.130 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:01.130 Verification LBA range: start 0x0 length 0x80000 00:19:01.130 nvme0n1 : 5.03 1424.62 5.56 0.00 0.00 89707.42 9369.81 85065.20 00:19:01.130 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:01.130 Verification LBA range: start 0x80000 length 0x80000 00:19:01.130 nvme0n1 : 5.03 1652.73 6.46 0.00 0.00 77338.25 11633.30 81275.17 00:19:01.130 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:01.130 Verification LBA range: start 0x0 length 0x80000 00:19:01.130 nvme0n2 : 5.07 1414.81 5.53 0.00 0.00 90200.15 11370.10 88434.12 00:19:01.130 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:01.130 Verification LBA range: start 0x80000 length 0x80000 00:19:01.130 nvme0n2 : 5.04 1652.35 6.45 0.00 0.00 77224.80 17160.43 79169.59 00:19:01.130 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:01.130 Verification LBA range: start 0x0 length 0x80000 00:19:01.130 nvme0n3 : 5.07 1413.09 5.52 0.00 0.00 90178.61 15370.69 95593.07 00:19:01.130 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:01.130 Verification LBA range: start 0x80000 length 0x80000 00:19:01.130 nvme0n3 : 5.06 1642.80 6.42 0.00 0.00 77563.26 11843.86 73695.10 00:19:01.130 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:01.130 Verification LBA range: start 0x0 length 0x20000 00:19:01.130 nvme1n1 : 5.08 1336.61 5.22 0.00 0.00 95195.21 15054.86 133072.30 00:19:01.130 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:01.130 Verification LBA range: start 0x20000 length 0x20000 00:19:01.130 nvme1n1 : 5.05 1598.16 6.24 0.00 0.00 79603.30 10106.76 123807.77 00:19:01.130 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:01.130 Verification LBA range: start 0x0 length 0xa0000 00:19:01.130 nvme2n1 : 5.08 1284.98 5.02 0.00 0.00 98870.38 10317.31 171814.86 00:19:01.130 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:01.130 Verification LBA range: start 0xa0000 length 0xa0000 00:19:01.130 nvme2n1 : 5.07 1490.81 5.82 0.00 0.00 85218.70 10738.43 116227.70 00:19:01.130 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:01.130 Verification LBA range: start 0x0 length 0xbd0bd 00:19:01.130 nvme3n1 : 5.06 2326.39 9.09 0.00 0.00 54487.81 6448.32 74537.33 00:19:01.130 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:01.130 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:19:01.130 nvme3n1 : 5.07 2511.66 9.81 0.00 0.00 50439.15 4211.15 67799.49 00:19:01.130 [2024-12-07T04:03:43.866Z] =================================================================================================================== 00:19:01.130 [2024-12-07T04:03:43.866Z] Total : 19749.00 77.14 0.00 0.00 77355.13 4211.15 171814.86 00:19:02.071 00:19:02.071 real 0m7.136s 00:19:02.071 user 0m11.094s 00:19:02.071 sys 0m1.871s 00:19:02.071 04:03:44 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:19:02.071 04:03:44 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:02.071 ************************************ 00:19:02.071 END TEST bdev_verify 00:19:02.071 ************************************ 00:19:02.332 04:03:44 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:02.332 04:03:44 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:02.332 04:03:44 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:02.332 04:03:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:02.332 ************************************ 00:19:02.332 START TEST bdev_verify_big_io 00:19:02.332 ************************************ 00:19:02.332 04:03:44 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:02.332 [2024-12-07 04:03:44.972733] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:19:02.332 [2024-12-07 04:03:44.972846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74591 ] 00:19:02.591 [2024-12-07 04:03:45.162822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:02.591 [2024-12-07 04:03:45.278344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.591 [2024-12-07 04:03:45.278372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.159 Running I/O for 5 seconds... 
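The IOPS and bandwidth columns in these progress samples are two views of the same number: bandwidth is IOPS times the I/O size (-o 65536 for this big-I/O pass). The first sample below checks out as:

  2304 IOPS * 65536 B = 150,994,944 B/s
  150,994,944 B/s / 1,048,576 B/MiB = 144.00 MiB/s   (matches "2304.00 IOPS, 144.00 MiB/s")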
00:19:07.882 2304.00 IOPS, 144.00 MiB/s [2024-12-07T04:03:51.556Z] 3032.00 IOPS, 189.50 MiB/s [2024-12-07T04:03:52.124Z] 3689.33 IOPS, 230.58 MiB/s [2024-12-07T04:03:52.124Z] 3262.25 IOPS, 203.89 MiB/s 00:19:09.388 Latency(us) 00:19:09.388 [2024-12-07T04:03:52.124Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.388 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:09.388 Verification LBA range: start 0x0 length 0x8000 00:19:09.388 nvme0n1 : 5.61 114.01 7.13 0.00 0.00 1077290.65 11106.90 1408208.09 00:19:09.388 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:09.388 Verification LBA range: start 0x8000 length 0x8000 00:19:09.388 nvme0n1 : 5.40 233.87 14.62 0.00 0.00 535498.37 5369.21 818647.29 00:19:09.388 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:09.388 Verification LBA range: start 0x0 length 0x8000 00:19:09.388 nvme0n2 : 5.54 115.52 7.22 0.00 0.00 1019289.11 39795.35 1105005.39 00:19:09.388 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:09.388 Verification LBA range: start 0x8000 length 0x8000 00:19:09.388 nvme0n2 : 5.41 230.74 14.42 0.00 0.00 534401.00 96435.30 1165645.93 00:19:09.388 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:09.388 Verification LBA range: start 0x0 length 0x8000 00:19:09.388 nvme0n3 : 5.70 129.04 8.07 0.00 0.00 870567.72 61061.65 1320616.20 00:19:09.388 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:09.388 Verification LBA range: start 0x8000 length 0x8000 00:19:09.388 nvme0n3 : 5.41 248.37 15.52 0.00 0.00 488645.05 84222.97 693997.29 00:19:09.388 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:09.388 Verification LBA range: start 0x0 length 0x2000 00:19:09.388 nvme1n1 : 5.83 125.81 7.86 0.00 0.00 863201.98 36847.55 3005075.64 00:19:09.388 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:09.388 Verification LBA range: start 0x2000 length 0x2000 00:19:09.388 nvme1n1 : 5.45 258.14 16.13 0.00 0.00 463917.03 9790.92 481755.40 00:19:09.388 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:09.388 Verification LBA range: start 0x0 length 0xa000 00:19:09.388 nvme2n1 : 6.06 207.92 13.00 0.00 0.00 506733.55 14212.63 909608.10 00:19:09.388 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:09.388 Verification LBA range: start 0xa000 length 0xa000 00:19:09.388 nvme2n1 : 5.46 210.82 13.18 0.00 0.00 563214.62 8527.58 1212810.80 00:19:09.388 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:09.388 Verification LBA range: start 0x0 length 0xbd0b 00:19:09.388 nvme3n1 : 6.22 236.61 14.79 0.00 0.00 428399.52 671.15 2775989.15 00:19:09.388 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:09.388 Verification LBA range: start 0xbd0b length 0xbd0b 00:19:09.388 nvme3n1 : 5.47 327.51 20.47 0.00 0.00 357982.65 1802.90 458172.97 00:19:09.388 [2024-12-07T04:03:52.124Z] =================================================================================================================== 00:19:09.388 [2024-12-07T04:03:52.124Z] Total : 2438.37 152.40 0.00 0.00 571903.38 671.15 3005075.64 00:19:11.296 00:19:11.296 real 0m8.657s 00:19:11.296 user 0m15.714s 00:19:11.296 sys 0m0.589s 00:19:11.296 04:03:53 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:19:11.296 04:03:53 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:11.296 ************************************ 00:19:11.296 END TEST bdev_verify_big_io 00:19:11.296 ************************************ 00:19:11.296 04:03:53 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:11.296 04:03:53 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:11.296 04:03:53 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:11.296 04:03:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:11.296 ************************************ 00:19:11.296 START TEST bdev_write_zeroes 00:19:11.296 ************************************ 00:19:11.296 04:03:53 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:11.296 [2024-12-07 04:03:53.709929] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:19:11.296 [2024-12-07 04:03:53.710070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74706 ] 00:19:11.296 [2024-12-07 04:03:53.889752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.296 [2024-12-07 04:03:53.997878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.866 Running I/O for 1 seconds... 
00:19:12.806 50688.00 IOPS, 198.00 MiB/s 00:19:12.806 Latency(us) 00:19:12.806 [2024-12-07T04:03:55.542Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.806 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:12.806 nvme0n1 : 1.03 7833.45 30.60 0.00 0.00 16325.40 8474.94 36847.55 00:19:12.806 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:12.806 nvme0n2 : 1.03 7822.78 30.56 0.00 0.00 16337.46 8527.58 35794.76 00:19:12.806 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:12.806 nvme0n3 : 1.03 7811.69 30.51 0.00 0.00 16349.64 8843.41 35163.09 00:19:12.806 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:12.806 nvme1n1 : 1.03 7804.12 30.48 0.00 0.00 16356.27 8896.05 33899.75 00:19:12.806 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:12.806 nvme2n1 : 1.03 7796.95 30.46 0.00 0.00 16361.40 8843.41 32425.84 00:19:12.806 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:12.807 nvme3n1 : 1.03 10833.26 42.32 0.00 0.00 11765.07 5184.98 37058.11 00:19:12.807 [2024-12-07T04:03:55.543Z] =================================================================================================================== 00:19:12.807 [2024-12-07T04:03:55.543Z] Total : 49902.25 194.93 0.00 0.00 15354.63 5184.98 37058.11 00:19:14.184 00:19:14.184 real 0m2.990s 00:19:14.184 user 0m2.226s 00:19:14.184 sys 0m0.569s 00:19:14.184 04:03:56 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:14.184 04:03:56 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:14.184 ************************************ 00:19:14.185 END TEST bdev_write_zeroes 00:19:14.185 ************************************ 00:19:14.185 04:03:56 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:14.185 04:03:56 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:14.185 04:03:56 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:14.185 04:03:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:14.185 ************************************ 00:19:14.185 START TEST bdev_json_nonenclosed 00:19:14.185 ************************************ 00:19:14.185 04:03:56 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:14.185 [2024-12-07 04:03:56.776263] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
00:19:14.185 [2024-12-07 04:03:56.776370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74761 ] 00:19:14.444 [2024-12-07 04:03:56.955026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.444 [2024-12-07 04:03:57.061499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:14.444 [2024-12-07 04:03:57.061601] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:19:14.444 [2024-12-07 04:03:57.061623] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:14.444 [2024-12-07 04:03:57.061636] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:14.703 00:19:14.703 real 0m0.622s 00:19:14.703 user 0m0.366s 00:19:14.703 sys 0m0.151s 00:19:14.703 04:03:57 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:14.703 04:03:57 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:14.703 ************************************ 00:19:14.703 END TEST bdev_json_nonenclosed 00:19:14.703 ************************************ 00:19:14.703 04:03:57 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:14.703 04:03:57 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:14.703 04:03:57 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:14.703 04:03:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:14.703 ************************************ 00:19:14.703 START TEST bdev_json_nonarray 00:19:14.703 ************************************ 00:19:14.703 04:03:57 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:14.962 [2024-12-07 04:03:57.483780] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:19:14.962 [2024-12-07 04:03:57.483911] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74791 ] 00:19:14.962 [2024-12-07 04:03:57.663661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.222 [2024-12-07 04:03:57.770874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.222 [2024-12-07 04:03:57.770999] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
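Both failures in these two tests are deliberate: nonenclosed.json and nonarray.json violate the shape bdevperf's --json loader expects, and the run passes when the loader rejects them. For contrast, a minimal well-formed config is an object whose "subsystems" member is an array — the following is illustrative only, not one of the files used here:

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_malloc_create",
            "params": { "name": "Malloc0", "num_blocks": 32768, "block_size": 512 }
          }
        ]
      }
    ]
  }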
00:19:15.222 [2024-12-07 04:03:57.771023] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:15.222 [2024-12-07 04:03:57.771035] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:15.481 00:19:15.481 real 0m0.634s 00:19:15.481 user 0m0.389s 00:19:15.481 sys 0m0.140s 00:19:15.481 04:03:58 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:15.481 04:03:58 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:15.481 ************************************ 00:19:15.481 END TEST bdev_json_nonarray 00:19:15.481 ************************************ 00:19:15.481 04:03:58 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]] 00:19:15.481 04:03:58 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]] 00:19:15.481 04:03:58 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]] 00:19:15.481 04:03:58 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:19:15.481 04:03:58 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup 00:19:15.481 04:03:58 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:15.481 04:03:58 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:15.481 04:03:58 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:19:15.481 04:03:58 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:19:15.481 04:03:58 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:19:15.481 04:03:58 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:19:15.481 04:03:58 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:16.420 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:16.989 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:19:16.989 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:19:17.248 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:19:17.507 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:19:17.507 ************************************ 00:19:17.507 END TEST blockdev_xnvme 00:19:17.507 ************************************ 00:19:17.507 00:19:17.507 real 0m56.527s 00:19:17.507 user 1m34.900s 00:19:17.507 sys 0m30.195s 00:19:17.507 04:04:00 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:17.507 04:04:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:19:17.507 04:04:00 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:19:17.507 04:04:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:17.507 04:04:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:17.507 04:04:00 -- common/autotest_common.sh@10 -- # set +x 00:19:17.507 ************************************ 00:19:17.507 START TEST ublk 00:19:17.507 ************************************ 00:19:17.507 04:04:00 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:19:17.767 * Looking for test storage... 
00:19:17.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:19:17.767 04:04:00 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:17.767 04:04:00 ublk -- common/autotest_common.sh@1711 -- # lcov --version 00:19:17.767 04:04:00 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:17.767 04:04:00 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:17.767 04:04:00 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:17.767 04:04:00 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:17.767 04:04:00 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:17.767 04:04:00 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:19:17.767 04:04:00 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:19:17.767 04:04:00 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:19:17.767 04:04:00 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:19:17.767 04:04:00 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:19:17.767 04:04:00 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:19:17.767 04:04:00 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:19:17.767 04:04:00 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:17.767 04:04:00 ublk -- scripts/common.sh@344 -- # case "$op" in 00:19:17.767 04:04:00 ublk -- scripts/common.sh@345 -- # : 1 00:19:17.767 04:04:00 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:17.767 04:04:00 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:17.767 04:04:00 ublk -- scripts/common.sh@365 -- # decimal 1 00:19:17.767 04:04:00 ublk -- scripts/common.sh@353 -- # local d=1 00:19:17.767 04:04:00 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:17.767 04:04:00 ublk -- scripts/common.sh@355 -- # echo 1 00:19:17.767 04:04:00 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:19:17.767 04:04:00 ublk -- scripts/common.sh@366 -- # decimal 2 00:19:17.767 04:04:00 ublk -- scripts/common.sh@353 -- # local d=2 00:19:17.767 04:04:00 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:17.767 04:04:00 ublk -- scripts/common.sh@355 -- # echo 2 00:19:17.767 04:04:00 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:19:17.767 04:04:00 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:17.767 04:04:00 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:17.767 04:04:00 ublk -- scripts/common.sh@368 -- # return 0 00:19:17.767 04:04:00 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:17.767 04:04:00 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:17.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.767 --rc genhtml_branch_coverage=1 00:19:17.767 --rc genhtml_function_coverage=1 00:19:17.767 --rc genhtml_legend=1 00:19:17.767 --rc geninfo_all_blocks=1 00:19:17.767 --rc geninfo_unexecuted_blocks=1 00:19:17.767 00:19:17.767 ' 00:19:17.767 04:04:00 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:17.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.767 --rc genhtml_branch_coverage=1 00:19:17.767 --rc genhtml_function_coverage=1 00:19:17.767 --rc genhtml_legend=1 00:19:17.767 --rc geninfo_all_blocks=1 00:19:17.767 --rc geninfo_unexecuted_blocks=1 00:19:17.767 00:19:17.767 ' 00:19:17.767 04:04:00 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:17.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.767 --rc genhtml_branch_coverage=1 00:19:17.767 --rc 
genhtml_function_coverage=1 00:19:17.767 --rc genhtml_legend=1 00:19:17.767 --rc geninfo_all_blocks=1 00:19:17.767 --rc geninfo_unexecuted_blocks=1 00:19:17.767 00:19:17.767 ' 00:19:17.767 04:04:00 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:17.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:17.767 --rc genhtml_branch_coverage=1 00:19:17.767 --rc genhtml_function_coverage=1 00:19:17.767 --rc genhtml_legend=1 00:19:17.767 --rc geninfo_all_blocks=1 00:19:17.767 --rc geninfo_unexecuted_blocks=1 00:19:17.767 00:19:17.767 ' 00:19:17.767 04:04:00 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:19:17.767 04:04:00 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:19:17.767 04:04:00 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:19:17.767 04:04:00 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:19:17.767 04:04:00 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:19:17.767 04:04:00 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:19:17.767 04:04:00 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:19:17.767 04:04:00 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:19:17.767 04:04:00 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:19:17.767 04:04:00 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:19:17.767 04:04:00 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:19:17.767 04:04:00 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:19:17.767 04:04:00 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:19:17.767 04:04:00 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:19:17.767 04:04:00 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:19:17.767 04:04:00 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:19:17.767 04:04:00 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:19:17.767 04:04:00 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:19:17.767 04:04:00 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:19:17.767 04:04:00 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:19:17.767 04:04:00 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:17.767 04:04:00 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:17.767 04:04:00 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:17.767 ************************************ 00:19:17.767 START TEST test_save_ublk_config 00:19:17.767 ************************************ 00:19:17.767 04:04:00 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:19:17.767 04:04:00 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:19:17.767 04:04:00 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75087 00:19:17.767 04:04:00 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:19:17.767 04:04:00 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:19:17.767 04:04:00 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75087 00:19:17.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
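waitforlisten blocks until the freshly launched spdk_tgt answers on its UNIX domain RPC socket. A minimal sketch of such a polling loop, assuming SPDK's bundled scripts/rpc.py client (the harness's own helper in autotest_common.sh differs in detail):

until scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1   # keep polling until the RPC server accepts connections
done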
00:19:17.767 04:04:00 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75087 ']' 00:19:17.767 04:04:00 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.767 04:04:00 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:17.768 04:04:00 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.768 04:04:00 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:17.768 04:04:00 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:18.027 [2024-12-07 04:04:00.545304] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:19:18.027 [2024-12-07 04:04:00.545441] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75087 ] 00:19:18.027 [2024-12-07 04:04:00.725462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:18.286 [2024-12-07 04:04:00.829148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:19.223 04:04:01 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:19.224 04:04:01 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:19:19.224 04:04:01 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:19:19.224 04:04:01 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:19:19.224 04:04:01 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.224 04:04:01 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:19.224 [2024-12-07 04:04:01.714982] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:19.224 [2024-12-07 04:04:01.716195] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:19.224 malloc0 00:19:19.224 [2024-12-07 04:04:01.803098] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:19:19.224 [2024-12-07 04:04:01.803214] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:19:19.224 [2024-12-07 04:04:01.803227] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:19:19.224 [2024-12-07 04:04:01.803236] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:19:19.224 [2024-12-07 04:04:01.812065] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:19.224 [2024-12-07 04:04:01.812092] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:19.224 [2024-12-07 04:04:01.818967] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:19.224 [2024-12-07 04:04:01.819070] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:19:19.224 [2024-12-07 04:04:01.835964] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:19:19.224 0 00:19:19.224 04:04:01 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.224 04:04:01 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:19:19.224 04:04:01 ublk.test_save_ublk_config -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.224 04:04:01 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:19.483 04:04:02 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.483 04:04:02 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:19:19.483 "subsystems": [ 00:19:19.483 { 00:19:19.483 "subsystem": "fsdev", 00:19:19.483 "config": [ 00:19:19.483 { 00:19:19.483 "method": "fsdev_set_opts", 00:19:19.483 "params": { 00:19:19.483 "fsdev_io_pool_size": 65535, 00:19:19.483 "fsdev_io_cache_size": 256 00:19:19.483 } 00:19:19.483 } 00:19:19.483 ] 00:19:19.483 }, 00:19:19.483 { 00:19:19.483 "subsystem": "keyring", 00:19:19.483 "config": [] 00:19:19.483 }, 00:19:19.483 { 00:19:19.483 "subsystem": "iobuf", 00:19:19.483 "config": [ 00:19:19.483 { 00:19:19.483 "method": "iobuf_set_options", 00:19:19.483 "params": { 00:19:19.483 "small_pool_count": 8192, 00:19:19.483 "large_pool_count": 1024, 00:19:19.483 "small_bufsize": 8192, 00:19:19.483 "large_bufsize": 135168, 00:19:19.483 "enable_numa": false 00:19:19.483 } 00:19:19.483 } 00:19:19.483 ] 00:19:19.483 }, 00:19:19.483 { 00:19:19.483 "subsystem": "sock", 00:19:19.483 "config": [ 00:19:19.483 { 00:19:19.483 "method": "sock_set_default_impl", 00:19:19.483 "params": { 00:19:19.483 "impl_name": "posix" 00:19:19.483 } 00:19:19.483 }, 00:19:19.483 { 00:19:19.483 "method": "sock_impl_set_options", 00:19:19.483 "params": { 00:19:19.483 "impl_name": "ssl", 00:19:19.483 "recv_buf_size": 4096, 00:19:19.483 "send_buf_size": 4096, 00:19:19.483 "enable_recv_pipe": true, 00:19:19.483 "enable_quickack": false, 00:19:19.483 "enable_placement_id": 0, 00:19:19.483 "enable_zerocopy_send_server": true, 00:19:19.483 "enable_zerocopy_send_client": false, 00:19:19.483 "zerocopy_threshold": 0, 00:19:19.483 "tls_version": 0, 00:19:19.483 "enable_ktls": false 00:19:19.483 } 00:19:19.483 }, 00:19:19.483 { 00:19:19.483 "method": "sock_impl_set_options", 00:19:19.483 "params": { 00:19:19.483 "impl_name": "posix", 00:19:19.483 "recv_buf_size": 2097152, 00:19:19.483 "send_buf_size": 2097152, 00:19:19.483 "enable_recv_pipe": true, 00:19:19.483 "enable_quickack": false, 00:19:19.483 "enable_placement_id": 0, 00:19:19.483 "enable_zerocopy_send_server": true, 00:19:19.483 "enable_zerocopy_send_client": false, 00:19:19.483 "zerocopy_threshold": 0, 00:19:19.483 "tls_version": 0, 00:19:19.483 "enable_ktls": false 00:19:19.483 } 00:19:19.483 } 00:19:19.483 ] 00:19:19.483 }, 00:19:19.483 { 00:19:19.483 "subsystem": "vmd", 00:19:19.483 "config": [] 00:19:19.483 }, 00:19:19.483 { 00:19:19.483 "subsystem": "accel", 00:19:19.483 "config": [ 00:19:19.483 { 00:19:19.483 "method": "accel_set_options", 00:19:19.483 "params": { 00:19:19.483 "small_cache_size": 128, 00:19:19.483 "large_cache_size": 16, 00:19:19.483 "task_count": 2048, 00:19:19.483 "sequence_count": 2048, 00:19:19.483 "buf_count": 2048 00:19:19.483 } 00:19:19.483 } 00:19:19.483 ] 00:19:19.483 }, 00:19:19.483 { 00:19:19.483 "subsystem": "bdev", 00:19:19.483 "config": [ 00:19:19.483 { 00:19:19.483 "method": "bdev_set_options", 00:19:19.483 "params": { 00:19:19.483 "bdev_io_pool_size": 65535, 00:19:19.483 "bdev_io_cache_size": 256, 00:19:19.483 "bdev_auto_examine": true, 00:19:19.483 "iobuf_small_cache_size": 128, 00:19:19.483 "iobuf_large_cache_size": 16 00:19:19.483 } 00:19:19.483 }, 00:19:19.483 { 00:19:19.483 "method": "bdev_raid_set_options", 00:19:19.483 "params": { 00:19:19.483 "process_window_size_kb": 1024, 00:19:19.483 
"process_max_bandwidth_mb_sec": 0 00:19:19.483 } 00:19:19.483 }, 00:19:19.483 { 00:19:19.483 "method": "bdev_iscsi_set_options", 00:19:19.483 "params": { 00:19:19.483 "timeout_sec": 30 00:19:19.483 } 00:19:19.483 }, 00:19:19.483 { 00:19:19.483 "method": "bdev_nvme_set_options", 00:19:19.483 "params": { 00:19:19.483 "action_on_timeout": "none", 00:19:19.483 "timeout_us": 0, 00:19:19.483 "timeout_admin_us": 0, 00:19:19.483 "keep_alive_timeout_ms": 10000, 00:19:19.483 "arbitration_burst": 0, 00:19:19.483 "low_priority_weight": 0, 00:19:19.483 "medium_priority_weight": 0, 00:19:19.483 "high_priority_weight": 0, 00:19:19.483 "nvme_adminq_poll_period_us": 10000, 00:19:19.483 "nvme_ioq_poll_period_us": 0, 00:19:19.483 "io_queue_requests": 0, 00:19:19.483 "delay_cmd_submit": true, 00:19:19.483 "transport_retry_count": 4, 00:19:19.483 "bdev_retry_count": 3, 00:19:19.483 "transport_ack_timeout": 0, 00:19:19.483 "ctrlr_loss_timeout_sec": 0, 00:19:19.483 "reconnect_delay_sec": 0, 00:19:19.483 "fast_io_fail_timeout_sec": 0, 00:19:19.483 "disable_auto_failback": false, 00:19:19.483 "generate_uuids": false, 00:19:19.483 "transport_tos": 0, 00:19:19.483 "nvme_error_stat": false, 00:19:19.483 "rdma_srq_size": 0, 00:19:19.483 "io_path_stat": false, 00:19:19.483 "allow_accel_sequence": false, 00:19:19.483 "rdma_max_cq_size": 0, 00:19:19.483 "rdma_cm_event_timeout_ms": 0, 00:19:19.483 "dhchap_digests": [ 00:19:19.483 "sha256", 00:19:19.483 "sha384", 00:19:19.483 "sha512" 00:19:19.483 ], 00:19:19.483 "dhchap_dhgroups": [ 00:19:19.483 "null", 00:19:19.483 "ffdhe2048", 00:19:19.483 "ffdhe3072", 00:19:19.483 "ffdhe4096", 00:19:19.483 "ffdhe6144", 00:19:19.483 "ffdhe8192" 00:19:19.483 ] 00:19:19.483 } 00:19:19.483 }, 00:19:19.483 { 00:19:19.483 "method": "bdev_nvme_set_hotplug", 00:19:19.483 "params": { 00:19:19.483 "period_us": 100000, 00:19:19.484 "enable": false 00:19:19.484 } 00:19:19.484 }, 00:19:19.484 { 00:19:19.484 "method": "bdev_malloc_create", 00:19:19.484 "params": { 00:19:19.484 "name": "malloc0", 00:19:19.484 "num_blocks": 8192, 00:19:19.484 "block_size": 4096, 00:19:19.484 "physical_block_size": 4096, 00:19:19.484 "uuid": "afa1b2f1-a745-486d-b9da-ae1d84091f73", 00:19:19.484 "optimal_io_boundary": 0, 00:19:19.484 "md_size": 0, 00:19:19.484 "dif_type": 0, 00:19:19.484 "dif_is_head_of_md": false, 00:19:19.484 "dif_pi_format": 0 00:19:19.484 } 00:19:19.484 }, 00:19:19.484 { 00:19:19.484 "method": "bdev_wait_for_examine" 00:19:19.484 } 00:19:19.484 ] 00:19:19.484 }, 00:19:19.484 { 00:19:19.484 "subsystem": "scsi", 00:19:19.484 "config": null 00:19:19.484 }, 00:19:19.484 { 00:19:19.484 "subsystem": "scheduler", 00:19:19.484 "config": [ 00:19:19.484 { 00:19:19.484 "method": "framework_set_scheduler", 00:19:19.484 "params": { 00:19:19.484 "name": "static" 00:19:19.484 } 00:19:19.484 } 00:19:19.484 ] 00:19:19.484 }, 00:19:19.484 { 00:19:19.484 "subsystem": "vhost_scsi", 00:19:19.484 "config": [] 00:19:19.484 }, 00:19:19.484 { 00:19:19.484 "subsystem": "vhost_blk", 00:19:19.484 "config": [] 00:19:19.484 }, 00:19:19.484 { 00:19:19.484 "subsystem": "ublk", 00:19:19.484 "config": [ 00:19:19.484 { 00:19:19.484 "method": "ublk_create_target", 00:19:19.484 "params": { 00:19:19.484 "cpumask": "1" 00:19:19.484 } 00:19:19.484 }, 00:19:19.484 { 00:19:19.484 "method": "ublk_start_disk", 00:19:19.484 "params": { 00:19:19.484 "bdev_name": "malloc0", 00:19:19.484 "ublk_id": 0, 00:19:19.484 "num_queues": 1, 00:19:19.484 "queue_depth": 128 00:19:19.484 } 00:19:19.484 } 00:19:19.484 ] 00:19:19.484 }, 00:19:19.484 { 
00:19:19.484 "subsystem": "nbd", 00:19:19.484 "config": [] 00:19:19.484 }, 00:19:19.484 { 00:19:19.484 "subsystem": "nvmf", 00:19:19.484 "config": [ 00:19:19.484 { 00:19:19.484 "method": "nvmf_set_config", 00:19:19.484 "params": { 00:19:19.484 "discovery_filter": "match_any", 00:19:19.484 "admin_cmd_passthru": { 00:19:19.484 "identify_ctrlr": false 00:19:19.484 }, 00:19:19.484 "dhchap_digests": [ 00:19:19.484 "sha256", 00:19:19.484 "sha384", 00:19:19.484 "sha512" 00:19:19.484 ], 00:19:19.484 "dhchap_dhgroups": [ 00:19:19.484 "null", 00:19:19.484 "ffdhe2048", 00:19:19.484 "ffdhe3072", 00:19:19.484 "ffdhe4096", 00:19:19.484 "ffdhe6144", 00:19:19.484 "ffdhe8192" 00:19:19.484 ] 00:19:19.484 } 00:19:19.484 }, 00:19:19.484 { 00:19:19.484 "method": "nvmf_set_max_subsystems", 00:19:19.484 "params": { 00:19:19.484 "max_subsystems": 1024 00:19:19.484 } 00:19:19.484 }, 00:19:19.484 { 00:19:19.484 "method": "nvmf_set_crdt", 00:19:19.484 "params": { 00:19:19.484 "crdt1": 0, 00:19:19.484 "crdt2": 0, 00:19:19.484 "crdt3": 0 00:19:19.484 } 00:19:19.484 } 00:19:19.484 ] 00:19:19.484 }, 00:19:19.484 { 00:19:19.484 "subsystem": "iscsi", 00:19:19.484 "config": [ 00:19:19.484 { 00:19:19.484 "method": "iscsi_set_options", 00:19:19.484 "params": { 00:19:19.484 "node_base": "iqn.2016-06.io.spdk", 00:19:19.484 "max_sessions": 128, 00:19:19.484 "max_connections_per_session": 2, 00:19:19.484 "max_queue_depth": 64, 00:19:19.484 "default_time2wait": 2, 00:19:19.484 "default_time2retain": 20, 00:19:19.484 "first_burst_length": 8192, 00:19:19.484 "immediate_data": true, 00:19:19.484 "allow_duplicated_isid": false, 00:19:19.484 "error_recovery_level": 0, 00:19:19.484 "nop_timeout": 60, 00:19:19.484 "nop_in_interval": 30, 00:19:19.484 "disable_chap": false, 00:19:19.484 "require_chap": false, 00:19:19.484 "mutual_chap": false, 00:19:19.484 "chap_group": 0, 00:19:19.484 "max_large_datain_per_connection": 64, 00:19:19.484 "max_r2t_per_connection": 4, 00:19:19.484 "pdu_pool_size": 36864, 00:19:19.484 "immediate_data_pool_size": 16384, 00:19:19.484 "data_out_pool_size": 2048 00:19:19.484 } 00:19:19.484 } 00:19:19.484 ] 00:19:19.484 } 00:19:19.484 ] 00:19:19.484 }' 00:19:19.484 04:04:02 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75087 00:19:19.484 04:04:02 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75087 ']' 00:19:19.484 04:04:02 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75087 00:19:19.484 04:04:02 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:19:19.484 04:04:02 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:19.484 04:04:02 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75087 00:19:19.484 killing process with pid 75087 00:19:19.484 04:04:02 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:19.484 04:04:02 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:19.484 04:04:02 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75087' 00:19:19.484 04:04:02 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75087 00:19:19.484 04:04:02 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75087 00:19:20.862 [2024-12-07 04:04:03.578728] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:19:21.121 [2024-12-07 04:04:03.633591] ublk.c: 
349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:21.121 [2024-12-07 04:04:03.633746] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:19:21.121 [2024-12-07 04:04:03.643965] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:21.121 [2024-12-07 04:04:03.644048] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:19:21.121 [2024-12-07 04:04:03.644074] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:19:21.121 [2024-12-07 04:04:03.644112] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:21.121 [2024-12-07 04:04:03.644293] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:23.026 04:04:05 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75154 00:19:23.026 04:04:05 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 75154 00:19:23.026 04:04:05 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75154 ']' 00:19:23.026 04:04:05 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.026 04:04:05 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.026 04:04:05 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.026 04:04:05 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:19:23.026 04:04:05 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.026 04:04:05 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:23.026 04:04:05 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:19:23.026 "subsystems": [ 00:19:23.026 { 00:19:23.026 "subsystem": "fsdev", 00:19:23.026 "config": [ 00:19:23.026 { 00:19:23.026 "method": "fsdev_set_opts", 00:19:23.026 "params": { 00:19:23.026 "fsdev_io_pool_size": 65535, 00:19:23.026 "fsdev_io_cache_size": 256 00:19:23.026 } 00:19:23.026 } 00:19:23.026 ] 00:19:23.026 }, 00:19:23.026 { 00:19:23.026 "subsystem": "keyring", 00:19:23.026 "config": [] 00:19:23.026 }, 00:19:23.026 { 00:19:23.026 "subsystem": "iobuf", 00:19:23.026 "config": [ 00:19:23.026 { 00:19:23.026 "method": "iobuf_set_options", 00:19:23.026 "params": { 00:19:23.026 "small_pool_count": 8192, 00:19:23.026 "large_pool_count": 1024, 00:19:23.026 "small_bufsize": 8192, 00:19:23.026 "large_bufsize": 135168, 00:19:23.026 "enable_numa": false 00:19:23.026 } 00:19:23.026 } 00:19:23.026 ] 00:19:23.026 }, 00:19:23.026 { 00:19:23.026 "subsystem": "sock", 00:19:23.026 "config": [ 00:19:23.026 { 00:19:23.026 "method": "sock_set_default_impl", 00:19:23.026 "params": { 00:19:23.026 "impl_name": "posix" 00:19:23.026 } 00:19:23.026 }, 00:19:23.026 { 00:19:23.026 "method": "sock_impl_set_options", 00:19:23.026 "params": { 00:19:23.026 "impl_name": "ssl", 00:19:23.026 "recv_buf_size": 4096, 00:19:23.026 "send_buf_size": 4096, 00:19:23.026 "enable_recv_pipe": true, 00:19:23.026 "enable_quickack": false, 00:19:23.026 "enable_placement_id": 0, 00:19:23.026 "enable_zerocopy_send_server": true, 00:19:23.026 "enable_zerocopy_send_client": false, 00:19:23.026 "zerocopy_threshold": 0, 00:19:23.026 "tls_version": 0, 00:19:23.026 "enable_ktls": false 00:19:23.026 } 00:19:23.026 }, 00:19:23.026 { 00:19:23.026 
"method": "sock_impl_set_options", 00:19:23.026 "params": { 00:19:23.026 "impl_name": "posix", 00:19:23.026 "recv_buf_size": 2097152, 00:19:23.026 "send_buf_size": 2097152, 00:19:23.026 "enable_recv_pipe": true, 00:19:23.026 "enable_quickack": false, 00:19:23.026 "enable_placement_id": 0, 00:19:23.026 "enable_zerocopy_send_server": true, 00:19:23.026 "enable_zerocopy_send_client": false, 00:19:23.026 "zerocopy_threshold": 0, 00:19:23.026 "tls_version": 0, 00:19:23.026 "enable_ktls": false 00:19:23.026 } 00:19:23.026 } 00:19:23.026 ] 00:19:23.026 }, 00:19:23.026 { 00:19:23.026 "subsystem": "vmd", 00:19:23.026 "config": [] 00:19:23.026 }, 00:19:23.026 { 00:19:23.026 "subsystem": "accel", 00:19:23.026 "config": [ 00:19:23.026 { 00:19:23.026 "method": "accel_set_options", 00:19:23.026 "params": { 00:19:23.026 "small_cache_size": 128, 00:19:23.026 "large_cache_size": 16, 00:19:23.026 "task_count": 2048, 00:19:23.026 "sequence_count": 2048, 00:19:23.026 "buf_count": 2048 00:19:23.026 } 00:19:23.026 } 00:19:23.026 ] 00:19:23.026 }, 00:19:23.026 { 00:19:23.026 "subsystem": "bdev", 00:19:23.026 "config": [ 00:19:23.026 { 00:19:23.026 "method": "bdev_set_options", 00:19:23.026 "params": { 00:19:23.026 "bdev_io_pool_size": 65535, 00:19:23.026 "bdev_io_cache_size": 256, 00:19:23.026 "bdev_auto_examine": true, 00:19:23.026 "iobuf_small_cache_size": 128, 00:19:23.026 "iobuf_large_cache_size": 16 00:19:23.026 } 00:19:23.026 }, 00:19:23.026 { 00:19:23.026 "method": "bdev_raid_set_options", 00:19:23.026 "params": { 00:19:23.026 "process_window_size_kb": 1024, 00:19:23.026 "process_max_bandwidth_mb_sec": 0 00:19:23.026 } 00:19:23.026 }, 00:19:23.026 { 00:19:23.026 "method": "bdev_iscsi_set_options", 00:19:23.026 "params": { 00:19:23.026 "timeout_sec": 30 00:19:23.026 } 00:19:23.026 }, 00:19:23.026 { 00:19:23.026 "method": "bdev_nvme_set_options", 00:19:23.026 "params": { 00:19:23.026 "action_on_timeout": "none", 00:19:23.026 "timeout_us": 0, 00:19:23.026 "timeout_admin_us": 0, 00:19:23.026 "keep_alive_timeout_ms": 10000, 00:19:23.026 "arbitration_burst": 0, 00:19:23.026 "low_priority_weight": 0, 00:19:23.026 "medium_priority_weight": 0, 00:19:23.026 "high_priority_weight": 0, 00:19:23.026 "nvme_adminq_poll_period_us": 10000, 00:19:23.026 "nvme_ioq_poll_period_us": 0, 00:19:23.026 "io_queue_requests": 0, 00:19:23.026 "delay_cmd_submit": true, 00:19:23.026 "transport_retry_count": 4, 00:19:23.026 "bdev_retry_count": 3, 00:19:23.026 "transport_ack_timeout": 0, 00:19:23.026 "ctrlr_loss_timeout_sec": 0, 00:19:23.026 "reconnect_delay_sec": 0, 00:19:23.026 "fast_io_fail_timeout_sec": 0, 00:19:23.026 "disable_auto_failback": false, 00:19:23.026 "generate_uuids": false, 00:19:23.026 "transport_tos": 0, 00:19:23.026 "nvme_error_stat": false, 00:19:23.026 "rdma_srq_size": 0, 00:19:23.026 "io_path_stat": false, 00:19:23.026 "allow_accel_sequence": false, 00:19:23.026 "rdma_max_cq_size": 0, 00:19:23.026 "rdma_cm_event_timeout_ms": 0, 00:19:23.026 "dhchap_digests": [ 00:19:23.026 "sha256", 00:19:23.026 "sha384", 00:19:23.026 "sha512" 00:19:23.026 ], 00:19:23.026 "dhchap_dhgroups": [ 00:19:23.026 "null", 00:19:23.026 "ffdhe2048", 00:19:23.026 "ffdhe3072", 00:19:23.026 "ffdhe4096", 00:19:23.026 "ffdhe6144", 00:19:23.026 "ffdhe8192" 00:19:23.026 ] 00:19:23.026 } 00:19:23.026 }, 00:19:23.026 { 00:19:23.026 "method": "bdev_nvme_set_hotplug", 00:19:23.026 "params": { 00:19:23.026 "period_us": 100000, 00:19:23.026 "enable": false 00:19:23.026 } 00:19:23.026 }, 00:19:23.026 { 00:19:23.026 "method": "bdev_malloc_create", 
00:19:23.026 "params": { 00:19:23.026 "name": "malloc0", 00:19:23.026 "num_blocks": 8192, 00:19:23.026 "block_size": 4096, 00:19:23.026 "physical_block_size": 4096, 00:19:23.026 "uuid": "afa1b2f1-a745-486d-b9da-ae1d84091f73", 00:19:23.026 "optimal_io_boundary": 0, 00:19:23.026 "md_size": 0, 00:19:23.026 "dif_type": 0, 00:19:23.026 "dif_is_head_of_md": false, 00:19:23.026 "dif_pi_format": 0 00:19:23.026 } 00:19:23.026 }, 00:19:23.026 { 00:19:23.026 "method": "bdev_wait_for_examine" 00:19:23.026 } 00:19:23.026 ] 00:19:23.026 }, 00:19:23.026 { 00:19:23.026 "subsystem": "scsi", 00:19:23.026 "config": null 00:19:23.026 }, 00:19:23.026 { 00:19:23.026 "subsystem": "scheduler", 00:19:23.026 "config": [ 00:19:23.026 { 00:19:23.026 "method": "framework_set_scheduler", 00:19:23.026 "params": { 00:19:23.026 "name": "static" 00:19:23.026 } 00:19:23.026 } 00:19:23.026 ] 00:19:23.026 }, 00:19:23.026 { 00:19:23.026 "subsystem": "vhost_scsi", 00:19:23.026 "config": [] 00:19:23.026 }, 00:19:23.026 { 00:19:23.026 "subsystem": "vhost_blk", 00:19:23.026 "config": [] 00:19:23.026 }, 00:19:23.026 { 00:19:23.026 "subsystem": "ublk", 00:19:23.026 "config": [ 00:19:23.026 { 00:19:23.026 "method": "ublk_create_target", 00:19:23.026 "params": { 00:19:23.026 "cpumask": "1" 00:19:23.026 } 00:19:23.026 }, 00:19:23.026 { 00:19:23.026 "method": "ublk_start_disk", 00:19:23.027 "params": { 00:19:23.027 "bdev_name": "malloc0", 00:19:23.027 "ublk_id": 0, 00:19:23.027 "num_queues": 1, 00:19:23.027 "queue_depth": 128 00:19:23.027 } 00:19:23.027 } 00:19:23.027 ] 00:19:23.027 }, 00:19:23.027 { 00:19:23.027 "subsystem": "nbd", 00:19:23.027 "config": [] 00:19:23.027 }, 00:19:23.027 { 00:19:23.027 "subsystem": "nvmf", 00:19:23.027 "config": [ 00:19:23.027 { 00:19:23.027 "method": "nvmf_set_config", 00:19:23.027 "params": { 00:19:23.027 "discovery_filter": "match_any", 00:19:23.027 "admin_cmd_passthru": { 00:19:23.027 "identify_ctrlr": false 00:19:23.027 }, 00:19:23.027 "dhchap_digests": [ 00:19:23.027 "sha256", 00:19:23.027 "sha384", 00:19:23.027 "sha512" 00:19:23.027 ], 00:19:23.027 "dhchap_dhgroups": [ 00:19:23.027 "null", 00:19:23.027 "ffdhe2048", 00:19:23.027 "ffdhe3072", 00:19:23.027 "ffdhe4096", 00:19:23.027 "ffdhe6144", 00:19:23.027 "ffdhe8192" 00:19:23.027 ] 00:19:23.027 } 00:19:23.027 }, 00:19:23.027 { 00:19:23.027 "method": "nvmf_set_max_subsystems", 00:19:23.027 "params": { 00:19:23.027 "max_subsystems": 1024 00:19:23.027 } 00:19:23.027 }, 00:19:23.027 { 00:19:23.027 "method": "nvmf_set_crdt", 00:19:23.027 "params": { 00:19:23.027 "crdt1": 0, 00:19:23.027 "crdt2": 0, 00:19:23.027 "crdt3": 0 00:19:23.027 } 00:19:23.027 } 00:19:23.027 ] 00:19:23.027 }, 00:19:23.027 { 00:19:23.027 "subsystem": "iscsi", 00:19:23.027 "config": [ 00:19:23.027 { 00:19:23.027 "method": "iscsi_set_options", 00:19:23.027 "params": { 00:19:23.027 "node_base": "iqn.2016-06.io.spdk", 00:19:23.027 "max_sessions": 128, 00:19:23.027 "max_connections_per_session": 2, 00:19:23.027 "max_queue_depth": 64, 00:19:23.027 "default_time2wait": 2, 00:19:23.027 "default_time2retain": 20, 00:19:23.027 "first_burst_length": 8192, 00:19:23.027 "immediate_data": true, 00:19:23.027 "allow_duplicated_isid": false, 00:19:23.027 "error_recovery_level": 0, 00:19:23.027 "nop_timeout": 60, 00:19:23.027 "nop_in_interval": 30, 00:19:23.027 "disable_chap": false, 00:19:23.027 "require_chap": false, 00:19:23.027 "mutual_chap": false, 00:19:23.027 "chap_group": 0, 00:19:23.027 "max_large_datain_per_connection": 64, 00:19:23.027 "max_r2t_per_connection": 4, 00:19:23.027 
"pdu_pool_size": 36864, 00:19:23.027 "immediate_data_pool_size": 16384, 00:19:23.027 "data_out_pool_size": 2048 00:19:23.027 } 00:19:23.027 } 00:19:23.027 ] 00:19:23.027 } 00:19:23.027 ] 00:19:23.027 }' 00:19:23.027 [2024-12-07 04:04:05.558369] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:19:23.027 [2024-12-07 04:04:05.558495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75154 ] 00:19:23.027 [2024-12-07 04:04:05.739999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.284 [2024-12-07 04:04:05.845810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.306 [2024-12-07 04:04:06.862963] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:24.306 [2024-12-07 04:04:06.864108] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:24.306 [2024-12-07 04:04:06.871094] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:19:24.306 [2024-12-07 04:04:06.871192] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:19:24.306 [2024-12-07 04:04:06.871204] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:19:24.306 [2024-12-07 04:04:06.871212] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:19:24.306 [2024-12-07 04:04:06.880046] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:24.306 [2024-12-07 04:04:06.880070] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:24.306 [2024-12-07 04:04:06.886960] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:24.306 [2024-12-07 04:04:06.887058] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:19:24.306 [2024-12-07 04:04:06.903947] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:19:24.306 04:04:06 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.306 04:04:06 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:19:24.306 04:04:06 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:19:24.306 04:04:06 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.306 04:04:06 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:19:24.306 04:04:06 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:24.306 04:04:06 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.306 04:04:06 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:19:24.306 04:04:07 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:19:24.306 04:04:07 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75154 00:19:24.306 04:04:07 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75154 ']' 00:19:24.306 04:04:07 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75154 00:19:24.306 04:04:07 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:19:24.306 04:04:07 ublk.test_save_ublk_config -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.306 04:04:07 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75154 00:19:24.570 04:04:07 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:24.570 04:04:07 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:24.570 killing process with pid 75154 00:19:24.570 04:04:07 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75154' 00:19:24.570 04:04:07 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75154 00:19:24.570 04:04:07 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75154 00:19:25.948 [2024-12-07 04:04:08.543844] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:19:25.948 [2024-12-07 04:04:08.574971] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:25.948 [2024-12-07 04:04:08.575089] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:19:25.948 [2024-12-07 04:04:08.583176] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:25.948 [2024-12-07 04:04:08.583253] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:19:25.948 [2024-12-07 04:04:08.583343] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:19:25.948 [2024-12-07 04:04:08.583400] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:25.948 [2024-12-07 04:04:08.583573] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:27.857 04:04:10 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:19:27.857 00:19:27.857 real 0m9.940s 00:19:27.857 user 0m7.538s 00:19:27.857 sys 0m3.117s 00:19:27.857 04:04:10 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:27.857 04:04:10 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:19:27.857 ************************************ 00:19:27.857 END TEST test_save_ublk_config 00:19:27.857 ************************************ 00:19:27.857 04:04:10 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75240 00:19:27.857 04:04:10 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:19:27.857 04:04:10 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:27.857 04:04:10 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75240 00:19:27.857 04:04:10 ublk -- common/autotest_common.sh@835 -- # '[' -z 75240 ']' 00:19:27.857 04:04:10 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.857 04:04:10 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.857 04:04:10 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.857 04:04:10 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.857 04:04:10 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:27.857 [2024-12-07 04:04:10.557047] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
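To make the round trip in test_save_ublk_config explicit: the JSON blob dumped by save_config on the first target (pid 75087) was fed back as the startup config of the second target (pid 75154) through process substitution, which is why its command line shows -c /dev/fd/63. A sketch of that pattern, using the standard scripts/rpc.py client in place of the harness's rpc_cmd wrapper:

config=$(scripts/rpc.py save_config)               # dump the live configuration as JSON
build/bin/spdk_tgt -L ublk -c <(echo "$config")    # bash exposes <(...) as /dev/fd/63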
00:19:27.857 [2024-12-07 04:04:10.557317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75240 ] 00:19:28.116 [2024-12-07 04:04:10.738353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:28.116 [2024-12-07 04:04:10.848012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.116 [2024-12-07 04:04:10.848046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.055 04:04:11 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:29.055 04:04:11 ublk -- common/autotest_common.sh@868 -- # return 0 00:19:29.055 04:04:11 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:19:29.055 04:04:11 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:29.055 04:04:11 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:29.055 04:04:11 ublk -- common/autotest_common.sh@10 -- # set +x 00:19:29.055 ************************************ 00:19:29.055 START TEST test_create_ublk 00:19:29.055 ************************************ 00:19:29.055 04:04:11 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:19:29.055 04:04:11 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:19:29.055 04:04:11 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.055 04:04:11 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:29.055 [2024-12-07 04:04:11.728969] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:19:29.055 [2024-12-07 04:04:11.732231] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:19:29.055 04:04:11 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.055 04:04:11 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:19:29.055 04:04:11 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:19:29.055 04:04:11 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.055 04:04:11 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:29.623 04:04:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.623 04:04:12 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:19:29.623 04:04:12 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:19:29.623 04:04:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.623 04:04:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:29.623 [2024-12-07 04:04:12.070163] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:19:29.623 [2024-12-07 04:04:12.070676] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:19:29.623 [2024-12-07 04:04:12.070704] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:19:29.623 [2024-12-07 04:04:12.070714] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:19:29.623 [2024-12-07 04:04:12.079425] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:19:29.623 [2024-12-07 04:04:12.079459] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:19:29.623 
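The ADD_DEV / SET_PARAMS / START_DEV handshake being logged here is what ublk_start_disk performs against the kernel driver. Reduced to plain RPC calls, a sketch of what test_create_ublk sets up, again with scripts/rpc.py standing in for the suite's rpc_cmd helper:

scripts/rpc.py ublk_create_target
scripts/rpc.py bdev_malloc_create 128 4096            # 128 MiB malloc bdev with 4096-byte blocks (Malloc0)
scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512  # 4 queues of depth 512, exposed as /dev/ublkb0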
[2024-12-07 04:04:12.085984] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:19:29.623 [2024-12-07 04:04:12.086683] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:19:29.623 [2024-12-07 04:04:12.100074] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:19:29.623 04:04:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.623 04:04:12 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:19:29.623 04:04:12 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:19:29.623 04:04:12 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:19:29.623 04:04:12 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.624 04:04:12 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:29.624 04:04:12 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.624 04:04:12 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:19:29.624 { 00:19:29.624 "ublk_device": "/dev/ublkb0", 00:19:29.624 "id": 0, 00:19:29.624 "queue_depth": 512, 00:19:29.624 "num_queues": 4, 00:19:29.624 "bdev_name": "Malloc0" 00:19:29.624 } 00:19:29.624 ]' 00:19:29.624 04:04:12 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:19:29.624 04:04:12 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:19:29.624 04:04:12 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:19:29.624 04:04:12 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:19:29.624 04:04:12 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:19:29.624 04:04:12 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:19:29.624 04:04:12 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:19:29.624 04:04:12 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:19:29.624 04:04:12 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:19:29.624 04:04:12 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:19:29.624 04:04:12 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:19:29.624 04:04:12 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:19:29.624 04:04:12 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:19:29.624 04:04:12 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:19:29.624 04:04:12 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:19:29.624 04:04:12 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:19:29.624 04:04:12 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:19:29.624 04:04:12 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:19:29.624 04:04:12 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:19:29.624 04:04:12 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:19:29.624 04:04:12 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
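One detail of the template worth noting: fio writes the 0xcc pattern across the whole 128 MiB device, but because --time_based --runtime=10 lets the write phase consume the entire run, the in-run verification never starts (fio prints exactly that warning below). A hypothetical follow-up read-back pass, not part of this suite, would verify the pattern explicitly:

fio --name=readback --filename=/dev/ublkb0 --size=134217728 --rw=read --direct=1 --verify=pattern --verify_pattern=0xcc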
00:19:29.624 04:04:12 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:19:29.883 fio: verification read phase will never start because write phase uses all of runtime 00:19:29.883 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:19:29.883 fio-3.35 00:19:29.883 Starting 1 process 00:19:39.866 00:19:39.866 fio_test: (groupid=0, jobs=1): err= 0: pid=75292: Sat Dec 7 04:04:22 2024 00:19:39.866 write: IOPS=6991, BW=27.3MiB/s (28.6MB/s)(273MiB/10001msec); 0 zone resets 00:19:39.866 clat (usec): min=36, max=4074, avg=142.18, stdev=102.64 00:19:39.866 lat (usec): min=36, max=4075, avg=142.65, stdev=102.64 00:19:39.866 clat percentiles (usec): 00:19:39.866 | 1.00th=[ 40], 5.00th=[ 43], 10.00th=[ 119], 20.00th=[ 135], 00:19:39.866 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 149], 00:19:39.866 | 70.00th=[ 153], 80.00th=[ 157], 90.00th=[ 161], 95.00th=[ 167], 00:19:39.866 | 99.00th=[ 182], 99.50th=[ 190], 99.90th=[ 2180], 99.95th=[ 2868], 00:19:39.866 | 99.99th=[ 3556] 00:19:39.866 bw ( KiB/s): min=26176, max=55504, per=100.00%, avg=28082.26, stdev=6683.47, samples=19 00:19:39.866 iops : min= 6544, max=13876, avg=7020.53, stdev=1670.88, samples=19 00:19:39.866 lat (usec) : 50=7.92%, 100=0.93%, 250=90.95%, 500=0.01%, 750=0.01% 00:19:39.866 lat (usec) : 1000=0.02% 00:19:39.866 lat (msec) : 2=0.05%, 4=0.11%, 10=0.01% 00:19:39.866 cpu : usr=1.28%, sys=4.67%, ctx=69921, majf=0, minf=794 00:19:39.866 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:39.866 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.866 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:39.866 issued rwts: total=0,69921,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:39.866 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:39.866 00:19:39.866 Run status group 0 (all jobs): 00:19:39.866 WRITE: bw=27.3MiB/s (28.6MB/s), 27.3MiB/s-27.3MiB/s (28.6MB/s-28.6MB/s), io=273MiB (286MB), run=10001-10001msec 00:19:39.866 00:19:39.866 Disk stats (read/write): 00:19:39.866 ublkb0: ios=0/69286, merge=0/0, ticks=0/9333, in_queue=9333, util=99.13% 00:19:39.866 04:04:22 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:19:39.866 04:04:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.866 04:04:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:39.866 [2024-12-07 04:04:22.587751] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:19:40.125 [2024-12-07 04:04:22.623008] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:19:40.125 [2024-12-07 04:04:22.623860] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:19:40.125 [2024-12-07 04:04:22.631005] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:40.125 [2024-12-07 04:04:22.631349] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:19:40.125 [2024-12-07 04:04:22.631366] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:19:40.125 04:04:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.125 04:04:22 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:19:40.125 04:04:22 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:19:40.125 04:04:22 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:19:40.125 04:04:22 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:40.125 04:04:22 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:40.125 04:04:22 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:40.125 04:04:22 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:40.125 04:04:22 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:19:40.125 04:04:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.125 04:04:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:40.125 [2024-12-07 04:04:22.655053] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:19:40.125 request: 00:19:40.125 { 00:19:40.125 "ublk_id": 0, 00:19:40.125 "method": "ublk_stop_disk", 00:19:40.125 "req_id": 1 00:19:40.125 } 00:19:40.125 Got JSON-RPC error response 00:19:40.125 response: 00:19:40.125 { 00:19:40.125 "code": -19, 00:19:40.125 "message": "No such device" 00:19:40.125 } 00:19:40.125 04:04:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:40.125 04:04:22 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:19:40.125 04:04:22 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:40.125 04:04:22 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:40.125 04:04:22 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:40.125 04:04:22 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:19:40.125 04:04:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.125 04:04:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:40.125 [2024-12-07 04:04:22.671065] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:40.125 [2024-12-07 04:04:22.678948] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:40.125 [2024-12-07 04:04:22.678997] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:40.125 04:04:22 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.125 04:04:22 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:40.125 04:04:22 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.125 04:04:22 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:41.063 04:04:23 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.063 04:04:23 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:19:41.063 04:04:23 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:19:41.063 04:04:23 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.063 04:04:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:19:41.063 04:04:23 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.063 04:04:23 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:19:41.063 04:04:23 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:19:41.063 04:04:23 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']'
00:19:41.063 04:04:23 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores
00:19:41.063 04:04:23 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:41.063 04:04:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:41.063 04:04:23 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:41.063 04:04:23 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]'
00:19:41.063 04:04:23 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length
00:19:41.063 ************************************
00:19:41.063 END TEST test_create_ublk
00:19:41.063 ************************************
00:19:41.063 04:04:23 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']'
00:19:41.063
00:19:41.063 real 0m11.856s
00:19:41.063 user 0m0.496s
00:19:41.063 sys 0m0.606s
00:19:41.063 04:04:23 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:41.063 04:04:23 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:41.063 04:04:23 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk
00:19:41.063 04:04:23 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:19:41.063 04:04:23 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:41.063 04:04:23 ublk -- common/autotest_common.sh@10 -- # set +x
00:19:41.063 ************************************
00:19:41.063 START TEST test_create_multi_ublk
00:19:41.063 ************************************
00:19:41.063 04:04:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk
00:19:41.063 04:04:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target
00:19:41.063 04:04:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:41.063 04:04:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:41.063 [2024-12-07 04:04:23.657952] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:19:41.063 [2024-12-07 04:04:23.660493] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:19:41.063 04:04:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:41.063 04:04:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target=
00:19:41.063 04:04:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3
00:19:41.063 04:04:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- for i in $(seq 0 $MAX_DEV_ID)
00:19:41.063 04:04:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096
00:19:41.063 04:04:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:41.063 04:04:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:41.322 04:04:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:41.322 04:04:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0
00:19:41.322 04:04:23 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512
00:19:41.322 04:04:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:41.322 04:04:23 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:41.322 [2024-12-07 04:04:23.977150] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512
00:19:41.322 [2024-12-07 04:04:23.977685] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0
00:19:41.322 [2024-12-07 04:04:23.977708] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:19:41.322 [2024-12-07 04:04:23.977725] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:19:41.322 [2024-12-07 04:04:23.985056] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:19:41.322 [2024-12-07 04:04:23.985096] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
00:19:41.322 [2024-12-07 04:04:23.992971] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:19:41.322 [2024-12-07 04:04:23.993588] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:19:41.322 [2024-12-07 04:04:24.002447] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:19:41.322 04:04:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:41.322 04:04:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0
00:19:41.322 04:04:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- for i in $(seq 0 $MAX_DEV_ID)
00:19:41.322 04:04:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096
00:19:41.322 04:04:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:41.322 04:04:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:41.888 04:04:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:41.888 04:04:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1
00:19:41.888 04:04:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512
00:19:41.888 04:04:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:41.888 04:04:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:41.888 [2024-12-07 04:04:24.343146] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512
00:19:41.888 [2024-12-07 04:04:24.343672] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1
00:19:41.888 [2024-12-07 04:04:24.343698] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq
00:19:41.888 [2024-12-07 04:04:24.343708] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV
00:19:41.888 [2024-12-07 04:04:24.347699] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed
00:19:41.888 [2024-12-07 04:04:24.347728] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS
00:19:41.888 [2024-12-07 04:04:24.357976] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:19:41.888 [2024-12-07 04:04:24.358618] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV
00:19:41.888 [2024-12-07 04:04:24.381979] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed
00:19:41.888 04:04:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:41.888 04:04:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1
00:19:41.888 04:04:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- for i in $(seq 0 $MAX_DEV_ID)
00:19:41.888 04:04:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096
00:19:41.888 04:04:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:41.888 04:04:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:42.147 04:04:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:42.147 04:04:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2
00:19:42.147 04:04:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512
00:19:42.147 04:04:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:42.147 04:04:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:42.147 [2024-12-07 04:04:24.729102] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512
00:19:42.147 [2024-12-07 04:04:24.729627] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2
00:19:42.147 [2024-12-07 04:04:24.729649] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq
00:19:42.147 [2024-12-07 04:04:24.729663] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV
00:19:42.147 [2024-12-07 04:04:24.738409] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed
00:19:42.147 [2024-12-07 04:04:24.738446] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS
00:19:42.147 [2024-12-07 04:04:24.744980] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:19:42.147 [2024-12-07 04:04:24.745611] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV
00:19:42.147 [2024-12-07 04:04:24.754011] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed
00:19:42.147 04:04:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:42.147 04:04:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2
00:19:42.147 04:04:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- for i in $(seq 0 $MAX_DEV_ID)
00:19:42.147 04:04:24 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096
00:19:42.147 04:04:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:42.147 04:04:24 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:42.406 04:04:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:42.406 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3
00:19:42.406 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512
00:19:42.406 04:04:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:42.406 04:04:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:42.406 [2024-12-07 04:04:25.089153] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512
00:19:42.406 [2024-12-07 04:04:25.089657] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3
00:19:42.406 [2024-12-07 04:04:25.089683] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq
00:19:42.406 [2024-12-07 04:04:25.089693] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV
00:19:42.406 [2024-12-07 04:04:25.101002] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed
00:19:42.406 [2024-12-07 04:04:25.101032] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS
00:19:42.406 [2024-12-07 04:04:25.108977] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:19:42.406 [2024-12-07 04:04:25.109608] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV
00:19:42.406 [2024-12-07 04:04:25.117971] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed
00:19:42.406 04:04:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:42.406 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3
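The trace above is the device-creation loop run to completion for Malloc0 through Malloc3: each iteration creates a malloc bdev, and ublk_start_disk then drives the ADD_DEV -> SET_PARAMS -> START_DEV control-command sequence in the kernel driver. A minimal sketch of the equivalent manual RPC flow against a running spdk_tgt, assuming the default /var/tmp/spdk.sock socket (sizes and queue settings copied from the test):

    # Register the ublk kernel transport once per target process.
    ./scripts/rpc.py ublk_create_target
    # One 128 MiB malloc bdev (4096-byte block size) per device, each
    # exposed as /dev/ublkb<id> with 4 queues of depth 512.
    for i in 0 1 2 3; do
        ./scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096
        ./scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512
    done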
00:19:42.406 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks
00:19:42.406 04:04:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:42.406 04:04:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:42.666 04:04:25 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:42.666 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[
00:19:42.666 {
00:19:42.666 "ublk_device": "/dev/ublkb0",
00:19:42.666 "id": 0,
00:19:42.666 "queue_depth": 512,
00:19:42.666 "num_queues": 4,
00:19:42.666 "bdev_name": "Malloc0"
00:19:42.666 },
00:19:42.666 {
00:19:42.666 "ublk_device": "/dev/ublkb1",
00:19:42.666 "id": 1,
00:19:42.666 "queue_depth": 512,
00:19:42.666 "num_queues": 4,
00:19:42.666 "bdev_name": "Malloc1"
00:19:42.666 },
00:19:42.666 {
00:19:42.666 "ublk_device": "/dev/ublkb2",
00:19:42.666 "id": 2,
00:19:42.666 "queue_depth": 512,
00:19:42.666 "num_queues": 4,
00:19:42.666 "bdev_name": "Malloc2"
00:19:42.666 },
00:19:42.666 {
00:19:42.666 "ublk_device": "/dev/ublkb3",
00:19:42.666 "id": 3,
00:19:42.666 "queue_depth": 512,
00:19:42.666 "num_queues": 4,
00:19:42.666 "bdev_name": "Malloc3"
00:19:42.666 }
00:19:42.666 ]'
00:19:42.666 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3
00:19:42.666 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- for i in $(seq 0 $MAX_DEV_ID)
00:19:42.666 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device'
00:19:42.666 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]]
00:19:42.666 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id'
00:19:42.666 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]]
00:19:42.666 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth'
00:19:42.666 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:19:42.666 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues'
00:19:42.666 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:19:42.666 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name'
00:19:42.666 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]]
00:19:42.666 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- for i in $(seq 0 $MAX_DEV_ID)
00:19:42.666 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device'
00:19:42.924 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]]
00:19:42.924 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id'
00:19:42.924 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]]
00:19:42.924 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth'
00:19:42.924 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:19:42.924 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues'
00:19:42.924 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:19:42.924 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name'
00:19:42.924 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]]
00:19:42.924 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- for i in $(seq 0 $MAX_DEV_ID)
00:19:42.924 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device'
00:19:42.924 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]]
00:19:42.924 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id'
00:19:43.181 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]]
00:19:43.181 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth'
00:19:43.181 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:19:43.182 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues'
00:19:43.182 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:19:43.182 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name'
00:19:43.182 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]]
00:19:43.182 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- for i in $(seq 0 $MAX_DEV_ID)
00:19:43.182 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device'
00:19:43.182 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]]
00:19:43.182 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id'
00:19:43.182 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]]
00:19:43.182 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth'
00:19:43.440 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]]
00:19:43.440 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues'
00:19:43.440 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]]
00:19:43.440 04:04:25 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name'
00:19:43.440 04:04:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]]
00:19:43.440 04:04:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]]
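Each assertion above pairs a jq projection of the ublk_get_disks JSON with a bash pattern match. A compact way to run the same checks by hand, assuming jq is installed (expected values mirror the device listing above):

    disks=$(./scripts/rpc.py ublk_get_disks)
    echo "$disks" | jq -r '.[0].ublk_device'   # expect /dev/ublkb0
    echo "$disks" | jq -r '.[0].queue_depth'   # expect 512
    echo "$disks" | jq -r '.[0].num_queues'    # expect 4
    echo "$disks" | jq -r '.[0].bdev_name'     # expect Malloc0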
00:19:43.440 04:04:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3
00:19:43.440 04:04:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- for i in $(seq 0 $MAX_DEV_ID)
00:19:43.440 04:04:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0
00:19:43.440 04:04:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:43.440 04:04:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:43.440 [2024-12-07 04:04:26.019090] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV
00:19:43.440 [2024-12-07 04:04:26.067027] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed
00:19:43.440 [2024-12-07 04:04:26.068212] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV
00:19:43.440 [2024-12-07 04:04:26.079030] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed
00:19:43.440 [2024-12-07 04:04:26.079340] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq
00:19:43.440 [2024-12-07 04:04:26.079361] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped
00:19:43.440 04:04:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:43.440 04:04:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- for i in $(seq 0 $MAX_DEV_ID)
00:19:43.440 04:04:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1
00:19:43.440 04:04:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:43.440 04:04:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:43.440 [2024-12-07 04:04:26.089089] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:19:43.440 [2024-12-07 04:04:26.116578] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:19:43.440 [2024-12-07 04:04:26.117847] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV
00:19:43.440 [2024-12-07 04:04:26.125989] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed
00:19:43.440 [2024-12-07 04:04:26.126348] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq
00:19:43.440 [2024-12-07 04:04:26.126372] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped
00:19:43.440 04:04:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:43.440 04:04:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- for i in $(seq 0 $MAX_DEV_ID)
00:19:43.440 04:04:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2
00:19:43.440 04:04:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:43.440 04:04:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:43.440 [2024-12-07 04:04:26.140071] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV
00:19:43.699 [2024-12-07 04:04:26.185477] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed
00:19:43.699 [2024-12-07 04:04:26.186765] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV
00:19:43.699 [2024-12-07 04:04:26.192977] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed
00:19:43.699 [2024-12-07 04:04:26.193320] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq
00:19:43.699 [2024-12-07 04:04:26.193341] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped
00:19:43.699 04:04:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:43.699 04:04:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- for i in $(seq 0 $MAX_DEV_ID)
00:19:43.699 04:04:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3
00:19:43.699 04:04:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:43.699 04:04:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:43.699 [2024-12-07 04:04:26.206082] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV
00:19:43.699 [2024-12-07 04:04:26.238547] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed
00:19:43.699 [2024-12-07 04:04:26.239645] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV
00:19:43.699 [2024-12-07 04:04:26.248983] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed
00:19:43.699 [2024-12-07 04:04:26.249277] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq
00:19:43.699 [2024-12-07 04:04:26.249299] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped
00:19:43.699 04:04:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:43.699 04:04:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target
00:19:43.958 [2024-12-07 04:04:26.451026] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:19:43.958 [2024-12-07 04:04:26.458959] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:19:43.958 [2024-12-07 04:04:26.458999] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:19:43.958 04:04:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3
00:19:43.958 04:04:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- for i in $(seq 0 $MAX_DEV_ID)
00:19:43.958 04:04:26 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0
00:19:43.958 04:04:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:43.958 04:04:26 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:44.528 04:04:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:44.528 04:04:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- for i in $(seq 0 $MAX_DEV_ID)
00:19:44.528 04:04:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1
00:19:44.528 04:04:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:44.528 04:04:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:45.097 04:04:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:45.097 04:04:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- for i in $(seq 0 $MAX_DEV_ID)
00:19:45.097 04:04:27 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2
00:19:45.097 04:04:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:45.097 04:04:27 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:45.357 04:04:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:45.357 04:04:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- for i in $(seq 0 $MAX_DEV_ID)
00:19:45.357 04:04:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3
00:19:45.357 04:04:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:45.357 04:04:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:45.925 04:04:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
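Teardown runs in the reverse order of creation, as traced above: ublk_stop_disk issues STOP_DEV then DEL_DEV per device, ublk_destroy_target shuts the transport down (the test passes a longer RPC timeout here, since the call waits for the target to drain), and the backing bdevs are deleted last. A sketch of the same sequence under the same assumptions as the creation snippet:

    for i in 0 1 2 3; do
        ./scripts/rpc.py ublk_stop_disk "$i"
    done
    ./scripts/rpc.py -t 120 ublk_destroy_target
    for i in 0 1 2 3; do
        ./scripts/rpc.py bdev_malloc_delete "Malloc$i"
    done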
00:19:45.925 04:04:28 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices
00:19:45.925 04:04:28 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs
00:19:45.925 04:04:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:45.925 04:04:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:45.925 04:04:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:45.925 04:04:28 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]'
00:19:45.925 04:04:28 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length
00:19:45.925 04:04:28 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']'
00:19:45.925 04:04:28 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores
00:19:45.925 04:04:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:45.925 04:04:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:45.925 04:04:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:45.925 04:04:28 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]'
00:19:45.925 04:04:28 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length
00:19:45.925 ************************************
00:19:45.925 END TEST test_create_multi_ublk
00:19:45.925 ************************************
00:19:45.925 04:04:28 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']'
00:19:45.925
00:19:45.925 real 0m4.879s
00:19:45.925 user 0m1.029s
00:19:45.925 sys 0m0.222s
00:19:45.925 04:04:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:45.925 04:04:28 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x
00:19:45.925 04:04:28 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT
00:19:45.925 04:04:28 ublk -- ublk/ublk.sh@147 -- # cleanup
00:19:45.925 04:04:28 ublk -- ublk/ublk.sh@130 -- # killprocess 75240
00:19:45.925 04:04:28 ublk -- common/autotest_common.sh@954 -- # '[' -z 75240 ']'
00:19:45.925 04:04:28 ublk -- common/autotest_common.sh@958 -- # kill -0 75240
00:19:45.925 04:04:28 ublk -- common/autotest_common.sh@959 -- # uname
00:19:45.925 04:04:28 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:19:45.925 04:04:28 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75240
00:19:45.925 04:04:28 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:19:45.925 killing process with pid 75240
00:19:45.925 04:04:28 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:19:45.925 04:04:28 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75240'
00:19:45.925 04:04:28 ublk -- common/autotest_common.sh@973 -- # kill 75240
00:19:45.925 04:04:28 ublk -- common/autotest_common.sh@978 -- # wait 75240
00:19:47.305 [2024-12-07 04:04:29.839339] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:19:47.305 [2024-12-07 04:04:29.839416] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:19:48.684
00:19:48.684 real 0m30.992s
00:19:48.684 user 0m45.342s
00:19:48.684 sys 0m8.990s
00:19:48.684 04:04:31 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:48.684 04:04:31 ublk -- common/autotest_common.sh@10 -- # set +x
00:19:48.684 ************************************
00:19:48.684 END TEST ublk
00:19:48.684 ************************************
00:19:48.684 04:04:31 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh
00:19:48.684 04:04:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:19:48.684 04:04:31 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:48.684 04:04:31 -- common/autotest_common.sh@10 -- # set +x
00:19:48.684 ************************************
00:19:48.684 START TEST ublk_recovery
00:19:48.684 ************************************
00:19:48.684 04:04:31 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh
00:19:48.684 * Looking for test storage...
00:19:48.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk
00:19:48.684 04:04:31 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:19:48.684 04:04:31 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version
00:19:48.684 04:04:31 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:19:48.944 04:04:31 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:19:48.944 04:04:31 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:19:48.944 04:04:31 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l
00:19:48.944 04:04:31 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l
00:19:48.944 04:04:31 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-:
00:19:48.944 04:04:31 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1
00:19:48.944 04:04:31 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-:
00:19:48.944 04:04:31 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2
00:19:48.944 04:04:31 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<'
00:19:48.944 04:04:31 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2
00:19:48.944 04:04:31 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1
00:19:48.944 04:04:31 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:19:48.944 04:04:31 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in
00:19:48.944 04:04:31 ublk_recovery -- scripts/common.sh@345 -- # : 1
00:19:48.944 04:04:31 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 ))
00:19:48.944 04:04:31 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:19:48.944 04:04:31 ublk_recovery -- scripts/common.sh@365 -- # decimal 1
00:19:48.944 04:04:31 ublk_recovery -- scripts/common.sh@353 -- # local d=1
00:19:48.944 04:04:31 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:19:48.944 04:04:31 ublk_recovery -- scripts/common.sh@355 -- # echo 1
00:19:48.944 04:04:31 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1
00:19:48.944 04:04:31 ublk_recovery -- scripts/common.sh@366 -- # decimal 2
00:19:48.944 04:04:31 ublk_recovery -- scripts/common.sh@353 -- # local d=2
00:19:48.944 04:04:31 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:19:48.944 04:04:31 ublk_recovery -- scripts/common.sh@355 -- # echo 2
00:19:48.944 04:04:31 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2
00:19:48.944 04:04:31 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:19:48.944 04:04:31 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:19:48.944 04:04:31 ublk_recovery -- scripts/common.sh@368 -- # return 0
00:19:48.944 04:04:31 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:19:48.944 04:04:31 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:19:48.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:48.944 --rc genhtml_branch_coverage=1
00:19:48.944 --rc genhtml_function_coverage=1
00:19:48.944 --rc genhtml_legend=1
00:19:48.944 --rc geninfo_all_blocks=1
00:19:48.944 --rc geninfo_unexecuted_blocks=1
00:19:48.944
00:19:48.944 '
00:19:48.944 04:04:31 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:19:48.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:48.944 --rc genhtml_branch_coverage=1
00:19:48.944 --rc genhtml_function_coverage=1
00:19:48.944 --rc genhtml_legend=1
00:19:48.944 --rc geninfo_all_blocks=1
00:19:48.944 --rc geninfo_unexecuted_blocks=1
00:19:48.944
00:19:48.944 '
00:19:48.944 04:04:31 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:19:48.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:48.944 --rc genhtml_branch_coverage=1
00:19:48.944 --rc genhtml_function_coverage=1
00:19:48.944 --rc genhtml_legend=1
00:19:48.944 --rc geninfo_all_blocks=1
00:19:48.944 --rc geninfo_unexecuted_blocks=1
00:19:48.944
00:19:48.944 '
00:19:48.944 04:04:31 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:19:48.944 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:19:48.944 --rc genhtml_branch_coverage=1
00:19:48.944 --rc genhtml_function_coverage=1
00:19:48.944 --rc genhtml_legend=1
00:19:48.944 --rc geninfo_all_blocks=1
00:19:48.944 --rc geninfo_unexecuted_blocks=1
00:19:48.944
00:19:48.944 '
00:19:48.944 04:04:31 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh
00:19:48.944 04:04:31 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128
00:19:48.944 04:04:31 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512
00:19:48.944 04:04:31 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400
00:19:48.944 04:04:31 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096
00:19:48.944 04:04:31 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4
00:19:48.944 04:04:31 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304
00:19:48.944 04:04:31 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124
00:19:48.944 04:04:31 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424
00:19:48.944 04:04:31 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv
00:19:48.944 04:04:31 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=75673
00:19:48.944 04:04:31 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk
00:19:48.944 04:04:31 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:19:48.944 04:04:31 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 75673
00:19:48.944 04:04:31 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75673 ']'
00:19:48.944 04:04:31 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:48.944 04:04:31 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:19:48.944 04:04:31 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:48.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:48.944 04:04:31 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:19:48.944 04:04:31 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:19:48.944 [2024-12-07 04:04:31.614072] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization...
00:19:49.203 [2024-12-07 04:04:31.614991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75673 ]
00:19:49.203 [2024-12-07 04:04:31.800496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:19:49.203 [2024-12-07 04:04:31.933249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:49.203 [2024-12-07 04:04:31.933273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:50.584 04:04:32 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:19:50.584 04:04:32 ublk_recovery -- common/autotest_common.sh@868 -- # return 0
00:19:50.584 04:04:32 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target
00:19:50.584 04:04:32 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:50.584 04:04:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:19:50.584 [2024-12-07 04:04:32.923959] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:19:50.584 [2024-12-07 04:04:32.927161] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:19:50.584 04:04:32 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:50.584 04:04:32 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096
00:19:50.584 04:04:32 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:50.584 04:04:32 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:19:50.584 malloc0
00:19:50.584 04:04:33 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:50.584 04:04:33 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128
00:19:50.584 04:04:33 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:19:50.584 04:04:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:19:50.584 [2024-12-07 04:04:33.100125] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128
00:19:50.584 [2024-12-07 04:04:33.100267] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1
00:19:50.584 [2024-12-07 04:04:33.100284] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq
00:19:50.584 [2024-12-07 04:04:33.100295] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV
00:19:50.584 [2024-12-07 04:04:33.109152] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed
00:19:50.584 [2024-12-07 04:04:33.109184] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS
00:19:50.584 [2024-12-07 04:04:33.115979] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:19:50.584 [2024-12-07 04:04:33.116146] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV
00:19:50.584 [2024-12-07 04:04:33.130976] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed
00:19:50.584 1
00:19:50.584 04:04:33 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:19:50.584 04:04:33 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1
00:19:51.522 04:04:34 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=75714
00:19:51.522 04:04:34 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60
00:19:51.522 04:04:34 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5
00:19:51.781 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:19:51.781 fio-3.35
00:19:51.781 Starting 1 process
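The recovery scenario needs I/O in flight when the target dies, so the test starts fio against the ublk block device before the kill below. The invocation is the one shown in the trace; backgrounding with & is an assumption implied by the fio_proc= assignment in the script, and the taskset pinning keeps fio off cores 0-1, where the two SPDK reactors run:

    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 \
        --numjobs=1 --iodepth=128 --ioengine=libaio \
        --rw=randrw --direct=1 --time_based --runtime=60 &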
00:19:57.055 04:04:39 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 75673
00:19:57.056 04:04:39 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5
00:20:02.335 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 75673 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk
00:20:02.335 04:04:44 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=75819
00:20:02.335 04:04:44 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk
00:20:02.335 04:04:44 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:20:02.335 04:04:44 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 75819
00:20:02.335 04:04:44 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75819 ']'
00:20:02.335 04:04:44 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:02.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:02.335 04:04:44 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:02.335 04:04:44 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:02.335 04:04:44 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:02.335 04:04:44 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:20:02.335 [2024-12-07 04:04:44.273535] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization...
00:20:02.335 [2024-12-07 04:04:44.273664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75819 ]
00:20:02.335 [2024-12-07 04:04:44.457656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:20:02.335 [2024-12-07 04:04:44.586555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:02.335 [2024-12-07 04:04:44.586584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:20:02.899 04:04:45 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:02.899 04:04:45 ublk_recovery -- common/autotest_common.sh@868 -- # return 0
00:20:02.899 04:04:45 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target
00:20:02.899 04:04:45 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.899 04:04:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:20:02.899 [2024-12-07 04:04:45.508959] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:20:02.899 [2024-12-07 04:04:45.511763] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:20:02.899 04:04:45 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:02.899 04:04:45 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096
00:20:02.899 04:04:45 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:02.899 04:04:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:20:03.157 malloc0
00:20:03.157 04:04:45 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:03.157 04:04:45 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1
00:20:03.157 04:04:45 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:03.157 04:04:45 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:20:03.157 [2024-12-07 04:04:45.698132] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0
00:20:03.157 [2024-12-07 04:04:45.698183] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq
00:20:03.157 [2024-12-07 04:04:45.698196] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
00:20:03.157 1
00:20:03.158 04:04:45 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:03.158 04:04:45 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 75714
00:20:03.158 [2024-12-07 04:04:45.706957] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
00:20:03.158 [2024-12-07 04:04:45.706986] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1
00:20:04.092 [2024-12-07 04:04:46.705421] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
00:20:04.092 [2024-12-07 04:04:46.713964] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
00:20:04.092 [2024-12-07 04:04:46.713983] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1
00:20:05.029 [2024-12-07 04:04:47.712402] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
00:20:05.029 [2024-12-07 04:04:47.718974] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
00:20:05.029 [2024-12-07 04:04:47.718997] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1
00:20:06.030 [2024-12-07 04:04:48.717400] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO
00:20:06.030 [2024-12-07 04:04:48.718957] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed
00:20:06.030 [2024-12-07 04:04:48.718967] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1
00:20:06.030 [2024-12-07 04:04:48.718980] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda
00:20:06.030 [2024-12-07 04:04:48.719106] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY
00:20:27.976 [2024-12-07 04:05:09.193974] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed
00:20:27.976 [2024-12-07 04:05:09.198695] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY
00:20:27.976 [2024-12-07 04:05:09.204277] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed
00:20:27.976 [2024-12-07 04:05:09.204300] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully
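Recovery itself, as traced above: a fresh spdk_tgt re-creates the target and the backing bdev, then ublk_recover_disk re-attaches to the still-existing /dev/ublkb1 rather than creating a new device, polling GET_DEV_INFO until the kernel reports the device and then issuing START_USER_RECOVERY and END_USER_RECOVERY. A minimal sketch of the flow, assuming the same bdev name and device id as the test:

    ./build/bin/spdk_tgt -m 0x3 -L ublk &
    ./scripts/rpc.py ublk_create_target
    ./scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    # Re-attach to the surviving kernel device instead of starting a new one.
    ./scripts/rpc.py ublk_recover_disk malloc0 1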
00:20:54.530
00:20:54.530 fio_test: (groupid=0, jobs=1): err= 0: pid=75717: Sat Dec 7 04:05:34 2024
00:20:54.530 read: IOPS=10.9k, BW=42.5MiB/s (44.5MB/s)(2548MiB/60002msec)
00:20:54.530 slat (usec): min=2, max=488, avg= 8.84, stdev= 2.36
00:20:54.530 clat (usec): min=1283, max=30066k, avg=6061.82, stdev=309165.19
00:20:54.530 lat (usec): min=1291, max=30066k, avg=6070.66, stdev=309165.21
00:20:54.530 clat percentiles (msec):
00:20:54.530 | 1.00th=[ 3], 5.00th=[ 3], 10.00th=[ 3], 20.00th=[ 3],
00:20:54.530 | 30.00th=[ 3], 40.00th=[ 3], 50.00th=[ 3], 60.00th=[ 3],
00:20:54.530 | 70.00th=[ 3], 80.00th=[ 3], 90.00th=[ 4], 95.00th=[ 5],
00:20:54.530 | 99.00th=[ 6], 99.50th=[ 7], 99.90th=[ 8], 99.95th=[ 10],
00:20:54.530 | 99.99th=[17113]
00:20:54.530 bw ( KiB/s): min=18384, max=90712, per=100.00%, avg=85642.27, stdev=11969.75, samples=60
00:20:54.530 iops : min= 4596, max=22678, avg=21410.53, stdev=2992.42, samples=60
00:20:54.530 write: IOPS=10.9k, BW=42.4MiB/s (44.5MB/s)(2546MiB/60002msec); 0 zone resets
00:20:54.530 slat (usec): min=3, max=906, avg= 9.15, stdev= 2.82
00:20:54.530 clat (usec): min=1348, max=30067k, avg=5697.09, stdev=286003.88
00:20:54.530 lat (usec): min=1364, max=30067k, avg=5706.24, stdev=286003.89
00:20:54.530 clat percentiles (usec):
00:20:54.530 | 1.00th=[ 2278], 5.00th=[ 2442], 10.00th=[ 2606], 20.00th=[ 2737],
00:20:54.530 | 30.00th=[ 2802], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2900],
00:20:54.530 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3359], 95.00th=[ 4178],
00:20:54.530 | 99.00th=[ 5800], 99.50th=[ 6521], 99.90th=[ 7767], 99.95th=[ 9241],
00:20:54.530 | 99.99th=[13960]
00:20:54.530 bw ( KiB/s): min=18344, max=90416, per=100.00%, avg=85592.00, stdev=11973.36, samples=60
00:20:54.530 iops : min= 4586, max=22604, avg=21398.00, stdev=2993.34, samples=60
00:20:54.530 lat (msec) : 2=0.12%, 4=94.06%, 10=5.80%, 20=0.02%, >=2000=0.01%
00:20:54.530 cpu : usr=6.08%, sys=19.66%, ctx=56942, majf=0, minf=13
00:20:54.530 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:20:54.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:54.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:20:54.530 issued rwts: total=652313,651787,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:54.530 latency : target=0, window=0, percentile=100.00%, depth=128
00:20:54.530
00:20:54.530 Run status group 0 (all jobs):
00:20:54.530 READ: bw=42.5MiB/s (44.5MB/s), 42.5MiB/s-42.5MiB/s (44.5MB/s-44.5MB/s), io=2548MiB (2672MB), run=60002-60002msec
00:20:54.530 WRITE: bw=42.4MiB/s (44.5MB/s), 42.4MiB/s-42.4MiB/s (44.5MB/s-44.5MB/s), io=2546MiB (2670MB), run=60002-60002msec
00:20:54.530
00:20:54.530 Disk stats (read/write):
00:20:54.530 ublkb1: ios=649886/649305, merge=0/0, ticks=3881060/3564898, in_queue=7445958, util=99.92%
00:20:54.530 04:05:34 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1
00:20:54.530 04:05:34 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:54.530 04:05:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:20:54.530 [2024-12-07 04:05:34.426540] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV
00:20:54.530 [2024-12-07 04:05:34.467949] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed
00:20:54.530 [2024-12-07 04:05:34.468357] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV
00:20:54.530 [2024-12-07 04:05:34.475968] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed
00:20:54.530 [2024-12-07 04:05:34.476107] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq
00:20:54.530 [2024-12-07 04:05:34.476117] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped
00:20:54.530 04:05:34 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:54.530 04:05:34 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target
00:20:54.530 04:05:34 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:54.530 04:05:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:20:54.530 [2024-12-07 04:05:34.492074] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
00:20:54.530 [2024-12-07 04:05:34.499963] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:20:54.530 [2024-12-07 04:05:34.500004] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed
00:20:54.530 04:05:34 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:54.530 04:05:34 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT
00:20:54.530 04:05:34 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup
00:20:54.530 04:05:34 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 75819
00:20:54.530 04:05:34 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 75819 ']'
00:20:54.530 04:05:34 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 75819
00:20:54.530 04:05:34 ublk_recovery -- common/autotest_common.sh@959 -- # uname
00:20:54.530 04:05:34 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:54.530 04:05:34 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75819
00:20:54.530 killing process with pid 75819
00:20:54.530 04:05:34 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:54.530 04:05:34 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:54.530 04:05:34 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75819'
00:20:54.530 04:05:34 ublk_recovery -- common/autotest_common.sh@973 -- # kill 75819
00:20:54.530 04:05:34 ublk_recovery -- common/autotest_common.sh@978 -- # wait 75819
00:20:54.530 [2024-12-07 04:05:36.201367] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown
[2024-12-07 04:05:36.201438] ublk.c: 766:_ublk_fini_done: *DEBUG*:
00:20:55.100 ************************************
00:20:55.100 END TEST ublk_recovery
00:20:55.100 ************************************
00:20:55.100
00:20:55.100 real 1m6.451s
00:20:55.100 user 1m50.291s
00:20:55.100 sys 0m27.090s
00:20:55.100 04:05:37 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:20:55.100 04:05:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x
00:20:55.100 04:05:37 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]]
00:20:55.100 04:05:37 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:20:55.100 04:05:37 -- spdk/autotest.sh@260 -- # timing_exit lib
00:20:55.100 04:05:37 -- common/autotest_common.sh@732 -- # xtrace_disable
00:20:55.100 04:05:37 -- common/autotest_common.sh@10 -- # set +x
00:20:55.100 04:05:37 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:20:55.100 04:05:37 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:20:55.100 04:05:37 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:20:55.100 04:05:37 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:20:55.100 04:05:37 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:20:55.100 04:05:37 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:20:55.100 04:05:37 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:20:55.100 04:05:37 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:20:55.100 04:05:37 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:20:55.100 04:05:37 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']'
00:20:55.100 04:05:37 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
00:20:55.100 04:05:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:20:55.100 04:05:37 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:20:55.100 04:05:37 -- common/autotest_common.sh@10 -- # set +x
00:20:55.360 ************************************
00:20:55.360 START TEST ftl
00:20:55.360 ************************************
00:20:55.360 04:05:37 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
00:20:55.360 * Looking for test storage...
00:20:55.360 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:20:55.360 04:05:37 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:20:55.360 04:05:37 ftl -- common/autotest_common.sh@1711 -- # lcov --version
00:20:55.360 04:05:37 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:20:55.360 04:05:38 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:20:55.360 04:05:38 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:20:55.360 04:05:38 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l
00:20:55.360 04:05:38 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l
00:20:55.360 04:05:38 ftl -- scripts/common.sh@336 -- # IFS=.-:
00:20:55.360 04:05:38 ftl -- scripts/common.sh@336 -- # read -ra ver1
00:20:55.360 04:05:38 ftl -- scripts/common.sh@337 -- # IFS=.-:
00:20:55.360 04:05:38 ftl -- scripts/common.sh@337 -- # read -ra ver2
00:20:55.360 04:05:38 ftl -- scripts/common.sh@338 -- # local 'op=<'
00:20:55.360 04:05:38 ftl -- scripts/common.sh@340 -- # ver1_l=2
00:20:55.360 04:05:38 ftl -- scripts/common.sh@341 -- # ver2_l=1
00:20:55.360 04:05:38 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:20:55.360 04:05:38 ftl -- scripts/common.sh@344 -- # case "$op" in
00:20:55.360 04:05:38 ftl -- scripts/common.sh@345 -- # : 1
00:20:55.360 04:05:38 ftl -- scripts/common.sh@364 -- # (( v = 0 ))
00:20:55.360 04:05:38 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:55.360 04:05:38 ftl -- scripts/common.sh@365 -- # decimal 1
00:20:55.360 04:05:38 ftl -- scripts/common.sh@353 -- # local d=1
00:20:55.360 04:05:38 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:55.360 04:05:38 ftl -- scripts/common.sh@355 -- # echo 1
00:20:55.360 04:05:38 ftl -- scripts/common.sh@365 -- # ver1[v]=1
00:20:55.360 04:05:38 ftl -- scripts/common.sh@366 -- # decimal 2
00:20:55.360 04:05:38 ftl -- scripts/common.sh@353 -- # local d=2
00:20:55.360 04:05:38 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:55.360 04:05:38 ftl -- scripts/common.sh@355 -- # echo 2
00:20:55.361 04:05:38 ftl -- scripts/common.sh@366 -- # ver2[v]=2
00:20:55.361 04:05:38 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:20:55.361 04:05:38 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:20:55.361 04:05:38 ftl -- scripts/common.sh@368 -- # return 0
00:20:55.361 04:05:38 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:55.361 04:05:38 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:20:55.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:55.361 --rc genhtml_branch_coverage=1
00:20:55.361 --rc genhtml_function_coverage=1
00:20:55.361 --rc genhtml_legend=1
00:20:55.361 --rc geninfo_all_blocks=1
00:20:55.361 --rc geninfo_unexecuted_blocks=1
00:20:55.361
00:20:55.361 '
00:20:55.361 04:05:38 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:20:55.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:55.361 --rc genhtml_branch_coverage=1
00:20:55.361 --rc genhtml_function_coverage=1
00:20:55.361 --rc genhtml_legend=1
00:20:55.361 --rc geninfo_all_blocks=1
00:20:55.361 --rc geninfo_unexecuted_blocks=1
00:20:55.361
00:20:55.361 '
00:20:55.361 04:05:38 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:20:55.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:55.361 --rc genhtml_branch_coverage=1
00:20:55.361 --rc genhtml_function_coverage=1
00:20:55.361 --rc genhtml_legend=1
00:20:55.361 --rc geninfo_all_blocks=1
00:20:55.361 --rc geninfo_unexecuted_blocks=1
00:20:55.361
00:20:55.361 '
00:20:55.361 04:05:38 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:20:55.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:55.361 --rc genhtml_branch_coverage=1
00:20:55.361 --rc genhtml_function_coverage=1
00:20:55.361 --rc genhtml_legend=1
00:20:55.361 --rc geninfo_all_blocks=1
00:20:55.361 --rc geninfo_unexecuted_blocks=1
00:20:55.361
00:20:55.361 '
00:20:55.361 04:05:38 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:20:55.361 04:05:38 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh
00:20:55.361 04:05:38 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:20:55.361 04:05:38 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:20:55.361 04:05:38 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
00:20:55.621 04:05:38 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:20:55.621 04:05:38 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:20:55.621 04:05:38 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:20:55.621 04:05:38 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:20:55.621 04:05:38 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:55.621 04:05:38 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:55.621 04:05:38 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:20:55.621 04:05:38 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:20:55.621 04:05:38 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:20:55.621 04:05:38 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:20:55.621 04:05:38 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:20:55.621 04:05:38 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:20:55.621 04:05:38 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:55.621 04:05:38 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:20:55.621 04:05:38 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:20:55.621 04:05:38 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:20:55.621 04:05:38 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:20:55.621 04:05:38 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:20:55.621 04:05:38 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:20:55.621 04:05:38 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:20:55.621 04:05:38 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:20:55.621 04:05:38 ftl -- ftl/common.sh@23 -- # spdk_ini_pid=
00:20:55.621 04:05:38 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:20:55.621 04:05:38 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:20:55.621 04:05:38 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:20:55.621 04:05:38 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT
00:20:55.621 04:05:38 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED=
00:20:55.621 04:05:38 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED=
00:20:55.621 04:05:38 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE=
00:20:55.621 04:05:38 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:20:56.192 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:20:56.192 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:20:56.192 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:20:56.192 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:20:56.192 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:20:56.452 04:05:38 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76630
00:20:56.452 04:05:38 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc
00:20:56.452 04:05:38 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76630
00:20:56.452 04:05:38 ftl -- common/autotest_common.sh@835 -- # '[' -z 76630 ']'
00:20:56.452 04:05:38 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:56.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:56.452 04:05:38 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:20:56.452 04:05:38 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:56.452 04:05:38 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:20:56.452 04:05:38 ftl -- common/autotest_common.sh@10 -- # set +x
00:20:56.452 [2024-12-07 04:05:39.084643] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization...
00:20:56.712 [2024-12-07 04:05:39.084984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76630 ]
00:20:56.712 [2024-12-07 04:05:39.268219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:56.712 [2024-12-07 04:05:39.373986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:20:57.282 04:05:39 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:20:57.282 04:05:39 ftl -- common/autotest_common.sh@868 -- # return 0
00:20:57.282 04:05:39 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d
00:20:57.541 04:05:40 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init
00:20:58.491 04:05:41 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62
00:20:58.491 04:05:41 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:20:59.060 04:05:41 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720
00:20:59.060 04:05:41 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs
00:20:59.060 04:05:41 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
00:20:59.060 04:05:41 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0
00:20:59.060 04:05:41 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks
00:20:59.060 04:05:41 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0
00:20:59.060 04:05:41 ftl -- ftl/ftl.sh@50 -- # break
00:20:59.060 04:05:41 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']'
00:20:59.060 04:05:41 ftl -- ftl/ftl.sh@59 -- # base_size=1310720
00:20:59.060 04:05:41 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs
00:20:59.060 04:05:41 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
00:20:59.319 04:05:41 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0
00:20:59.319 04:05:41 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks
00:20:59.319 04:05:41 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0
00:20:59.319 04:05:41 ftl -- ftl/ftl.sh@63 -- # break
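Device selection for FTL, shown above, is a pair of jq filters over bdev_get_bdevs output: the non-volatile cache must be a non-zoned NVMe bdev exposing 64-byte metadata per block, while the base device is any other non-zoned bdev with at least 1310720 blocks. The cache-side query, reproduced for manual use under the same assumptions:

    ./scripts/rpc.py bdev_get_bdevs | jq -r \
      '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
       .driver_specific.nvme[].pci_address'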
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76630 00:20:59.319 killing process with pid 76630 00:20:59.319 04:05:42 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:59.319 04:05:42 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:59.319 04:05:42 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76630' 00:20:59.319 04:05:42 ftl -- common/autotest_common.sh@973 -- # kill 76630 00:20:59.320 04:05:42 ftl -- common/autotest_common.sh@978 -- # wait 76630 00:21:01.868 04:05:44 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:21:01.868 04:05:44 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:21:01.868 04:05:44 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:01.868 04:05:44 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:01.868 04:05:44 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:01.868 ************************************ 00:21:01.868 START TEST ftl_fio_basic 00:21:01.868 ************************************ 00:21:01.868 04:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:21:01.868 * Looking for test storage... 00:21:01.868 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:01.868 04:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:01.868 04:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:21:01.868 04:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:01.868 04:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:01.868 04:05:44 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:01.868 04:05:44 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:01.868 04:05:44 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:01.868 04:05:44 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:01.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.869 --rc genhtml_branch_coverage=1 00:21:01.869 --rc genhtml_function_coverage=1 00:21:01.869 --rc genhtml_legend=1 00:21:01.869 --rc geninfo_all_blocks=1 00:21:01.869 --rc geninfo_unexecuted_blocks=1 00:21:01.869 00:21:01.869 ' 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:01.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.869 --rc genhtml_branch_coverage=1 00:21:01.869 --rc genhtml_function_coverage=1 00:21:01.869 --rc genhtml_legend=1 00:21:01.869 --rc geninfo_all_blocks=1 00:21:01.869 --rc geninfo_unexecuted_blocks=1 00:21:01.869 00:21:01.869 ' 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:01.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.869 --rc genhtml_branch_coverage=1 00:21:01.869 --rc genhtml_function_coverage=1 00:21:01.869 --rc genhtml_legend=1 00:21:01.869 --rc geninfo_all_blocks=1 00:21:01.869 --rc geninfo_unexecuted_blocks=1 00:21:01.869 00:21:01.869 ' 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:01.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:01.869 --rc genhtml_branch_coverage=1 00:21:01.869 --rc genhtml_function_coverage=1 00:21:01.869 --rc genhtml_legend=1 00:21:01.869 --rc geninfo_all_blocks=1 00:21:01.869 --rc geninfo_unexecuted_blocks=1 00:21:01.869 00:21:01.869 ' 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
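The lt/cmp_versions trace above is the harness deciding whether the installed lcov predates 2.0 so it can pick compatible coverage flags. A minimal sketch of that component-wise comparison, condensed into one illustrative function (the name version_lt and the single-function shape are assumptions; the real scripts/common.sh splits this across cmp_versions and its helpers):

    # Split versions on ".", "-", ":" and compare component by component,
    # the same way the ver1/ver2 arrays in the trace are built and walked.
    version_lt() {
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"    # "1.15" -> (1 15)
        IFS='.-:' read -ra ver2 <<< "$2"    # "2"    -> (2)
        local v a b len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}  # missing components count as 0
            (( a < b )) && return 0          # first differing component decides
            (( a > b )) && return 1
        done
        return 1                             # equal is not "less than"
    }

    version_lt 1.15 2 && echo "lcov < 2: use the pre-2.0 lcov option names"

Here 1 < 2 on the first component, so the comparison returns true and the trace goes on to export the matching pre-2.0 LCOV_OPTS branch/function coverage flags.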
00:21:01.869 04:05:44 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:01.869 04:05:44 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:02.129 04:05:44 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:02.129 04:05:44 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:02.129 04:05:44 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:02.129 04:05:44 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:02.129 04:05:44 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:02.129 04:05:44 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:02.129 04:05:44 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:21:02.129 04:05:44 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:21:02.129 04:05:44 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:21:02.129 04:05:44 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:21:02.129 04:05:44 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:02.130 04:05:44 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:21:02.130 04:05:44 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:21:02.130 04:05:44 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:21:02.130 04:05:44 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:21:02.130 04:05:44 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:21:02.130 04:05:44 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:21:02.130 04:05:44 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:21:02.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.130 04:05:44 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:21:02.130 04:05:44 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:21:02.130 04:05:44 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:02.130 04:05:44 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:02.130 04:05:44 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:21:02.130 04:05:44 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=76779 00:21:02.130 04:05:44 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 76779 00:21:02.130 04:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 76779 ']' 00:21:02.130 04:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.130 04:05:44 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:21:02.130 04:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:02.130 04:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.130 04:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:02.130 04:05:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:02.130 [2024-12-07 04:05:44.716464] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
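Both spdk_tgt launches so far (pid 76630 with the default mask, and pid 76779 with -m 7 for the fio suite) are gated by waitforlisten, which blocks until the freshly forked target both stays alive and answers RPC on /var/tmp/spdk.sock. A hedged sketch of that gate, assuming a simple poll loop (the function name waitforlisten_sketch, the 0.1 s sleep, and the rpc_get_methods probe are illustrative; max_retries=100 and the rpc.py path come from the trace):

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        local i max_retries=100                      # matches the traced default
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
            "$rpc_py" -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                     # never came up
    }

Only once this gate returns does the script proceed, which is why each "Waiting for process to start up and listen on UNIX domain socket..." message brackets a target launch in the log.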
00:21:02.130 [2024-12-07 04:05:44.716746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76779 ] 00:21:02.389 [2024-12-07 04:05:44.898440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:02.389 [2024-12-07 04:05:45.004665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.389 [2024-12-07 04:05:45.004817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.389 [2024-12-07 04:05:45.004848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:03.324 04:05:45 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:03.324 04:05:45 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:21:03.324 04:05:45 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:03.324 04:05:45 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:21:03.324 04:05:45 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:03.324 04:05:45 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:21:03.324 04:05:45 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:21:03.324 04:05:45 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:03.582 04:05:46 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:03.582 04:05:46 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:21:03.582 04:05:46 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:03.582 04:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:03.582 04:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:03.582 04:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:21:03.582 04:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:21:03.582 04:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:03.582 04:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:03.582 { 00:21:03.582 "name": "nvme0n1", 00:21:03.582 "aliases": [ 00:21:03.582 "c1efcc0c-01d3-42be-b018-549ff7426bad" 00:21:03.582 ], 00:21:03.582 "product_name": "NVMe disk", 00:21:03.582 "block_size": 4096, 00:21:03.582 "num_blocks": 1310720, 00:21:03.582 "uuid": "c1efcc0c-01d3-42be-b018-549ff7426bad", 00:21:03.582 "numa_id": -1, 00:21:03.582 "assigned_rate_limits": { 00:21:03.582 "rw_ios_per_sec": 0, 00:21:03.582 "rw_mbytes_per_sec": 0, 00:21:03.582 "r_mbytes_per_sec": 0, 00:21:03.582 "w_mbytes_per_sec": 0 00:21:03.582 }, 00:21:03.582 "claimed": false, 00:21:03.582 "zoned": false, 00:21:03.582 "supported_io_types": { 00:21:03.582 "read": true, 00:21:03.582 "write": true, 00:21:03.582 "unmap": true, 00:21:03.582 "flush": true, 00:21:03.582 "reset": true, 00:21:03.582 "nvme_admin": true, 00:21:03.582 "nvme_io": true, 00:21:03.582 "nvme_io_md": false, 00:21:03.582 "write_zeroes": true, 00:21:03.582 "zcopy": false, 00:21:03.582 "get_zone_info": false, 00:21:03.582 "zone_management": false, 00:21:03.582 "zone_append": false, 00:21:03.582 "compare": true, 00:21:03.582 "compare_and_write": false, 00:21:03.582 "abort": true, 00:21:03.582 
"seek_hole": false, 00:21:03.582 "seek_data": false, 00:21:03.582 "copy": true, 00:21:03.582 "nvme_iov_md": false 00:21:03.582 }, 00:21:03.582 "driver_specific": { 00:21:03.582 "nvme": [ 00:21:03.582 { 00:21:03.582 "pci_address": "0000:00:11.0", 00:21:03.582 "trid": { 00:21:03.582 "trtype": "PCIe", 00:21:03.582 "traddr": "0000:00:11.0" 00:21:03.582 }, 00:21:03.582 "ctrlr_data": { 00:21:03.582 "cntlid": 0, 00:21:03.582 "vendor_id": "0x1b36", 00:21:03.582 "model_number": "QEMU NVMe Ctrl", 00:21:03.582 "serial_number": "12341", 00:21:03.582 "firmware_revision": "8.0.0", 00:21:03.582 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:03.582 "oacs": { 00:21:03.582 "security": 0, 00:21:03.582 "format": 1, 00:21:03.582 "firmware": 0, 00:21:03.582 "ns_manage": 1 00:21:03.582 }, 00:21:03.582 "multi_ctrlr": false, 00:21:03.582 "ana_reporting": false 00:21:03.582 }, 00:21:03.582 "vs": { 00:21:03.582 "nvme_version": "1.4" 00:21:03.582 }, 00:21:03.582 "ns_data": { 00:21:03.582 "id": 1, 00:21:03.582 "can_share": false 00:21:03.582 } 00:21:03.582 } 00:21:03.582 ], 00:21:03.582 "mp_policy": "active_passive" 00:21:03.582 } 00:21:03.582 } 00:21:03.582 ]' 00:21:03.582 04:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:03.841 04:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:21:03.841 04:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:03.841 04:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:03.841 04:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:03.841 04:05:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:21:03.841 04:05:46 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:21:03.841 04:05:46 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:03.841 04:05:46 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:21:03.841 04:05:46 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:03.841 04:05:46 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:04.099 04:05:46 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:21:04.099 04:05:46 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:04.099 04:05:46 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=55bf9d65-389e-4b66-bb39-7be2a7b11538 00:21:04.099 04:05:46 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 55bf9d65-389e-4b66-bb39-7be2a7b11538 00:21:04.358 04:05:47 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=1c8baf76-a0e3-4177-afc8-ca37aaa8151e 00:21:04.358 04:05:47 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 1c8baf76-a0e3-4177-afc8-ca37aaa8151e 00:21:04.358 04:05:47 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:21:04.358 04:05:47 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:04.358 04:05:47 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=1c8baf76-a0e3-4177-afc8-ca37aaa8151e 00:21:04.358 04:05:47 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:21:04.358 04:05:47 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 1c8baf76-a0e3-4177-afc8-ca37aaa8151e 00:21:04.358 04:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=1c8baf76-a0e3-4177-afc8-ca37aaa8151e 
00:21:04.358 04:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:04.358 04:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:21:04.358 04:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:21:04.358 04:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1c8baf76-a0e3-4177-afc8-ca37aaa8151e 00:21:04.617 04:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:04.617 { 00:21:04.617 "name": "1c8baf76-a0e3-4177-afc8-ca37aaa8151e", 00:21:04.617 "aliases": [ 00:21:04.617 "lvs/nvme0n1p0" 00:21:04.617 ], 00:21:04.617 "product_name": "Logical Volume", 00:21:04.617 "block_size": 4096, 00:21:04.617 "num_blocks": 26476544, 00:21:04.617 "uuid": "1c8baf76-a0e3-4177-afc8-ca37aaa8151e", 00:21:04.617 "assigned_rate_limits": { 00:21:04.617 "rw_ios_per_sec": 0, 00:21:04.617 "rw_mbytes_per_sec": 0, 00:21:04.617 "r_mbytes_per_sec": 0, 00:21:04.617 "w_mbytes_per_sec": 0 00:21:04.617 }, 00:21:04.617 "claimed": false, 00:21:04.617 "zoned": false, 00:21:04.617 "supported_io_types": { 00:21:04.617 "read": true, 00:21:04.617 "write": true, 00:21:04.617 "unmap": true, 00:21:04.617 "flush": false, 00:21:04.617 "reset": true, 00:21:04.617 "nvme_admin": false, 00:21:04.617 "nvme_io": false, 00:21:04.617 "nvme_io_md": false, 00:21:04.617 "write_zeroes": true, 00:21:04.617 "zcopy": false, 00:21:04.617 "get_zone_info": false, 00:21:04.617 "zone_management": false, 00:21:04.617 "zone_append": false, 00:21:04.617 "compare": false, 00:21:04.617 "compare_and_write": false, 00:21:04.617 "abort": false, 00:21:04.617 "seek_hole": true, 00:21:04.617 "seek_data": true, 00:21:04.617 "copy": false, 00:21:04.617 "nvme_iov_md": false 00:21:04.617 }, 00:21:04.617 "driver_specific": { 00:21:04.617 "lvol": { 00:21:04.617 "lvol_store_uuid": "55bf9d65-389e-4b66-bb39-7be2a7b11538", 00:21:04.617 "base_bdev": "nvme0n1", 00:21:04.617 "thin_provision": true, 00:21:04.617 "num_allocated_clusters": 0, 00:21:04.617 "snapshot": false, 00:21:04.617 "clone": false, 00:21:04.617 "esnap_clone": false 00:21:04.617 } 00:21:04.617 } 00:21:04.617 } 00:21:04.617 ]' 00:21:04.617 04:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:04.617 04:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:21:04.617 04:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:04.617 04:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:04.617 04:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:04.617 04:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:21:04.617 04:05:47 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:21:04.617 04:05:47 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:21:04.617 04:05:47 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:04.876 04:05:47 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:04.876 04:05:47 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:04.876 04:05:47 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 1c8baf76-a0e3-4177-afc8-ca37aaa8151e 00:21:04.876 04:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=1c8baf76-a0e3-4177-afc8-ca37aaa8151e 00:21:04.876 04:05:47 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:04.876 04:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:21:04.876 04:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:21:04.876 04:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1c8baf76-a0e3-4177-afc8-ca37aaa8151e 00:21:05.135 04:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:05.135 { 00:21:05.135 "name": "1c8baf76-a0e3-4177-afc8-ca37aaa8151e", 00:21:05.135 "aliases": [ 00:21:05.135 "lvs/nvme0n1p0" 00:21:05.135 ], 00:21:05.135 "product_name": "Logical Volume", 00:21:05.135 "block_size": 4096, 00:21:05.135 "num_blocks": 26476544, 00:21:05.135 "uuid": "1c8baf76-a0e3-4177-afc8-ca37aaa8151e", 00:21:05.135 "assigned_rate_limits": { 00:21:05.135 "rw_ios_per_sec": 0, 00:21:05.135 "rw_mbytes_per_sec": 0, 00:21:05.135 "r_mbytes_per_sec": 0, 00:21:05.135 "w_mbytes_per_sec": 0 00:21:05.135 }, 00:21:05.135 "claimed": false, 00:21:05.135 "zoned": false, 00:21:05.135 "supported_io_types": { 00:21:05.135 "read": true, 00:21:05.135 "write": true, 00:21:05.135 "unmap": true, 00:21:05.135 "flush": false, 00:21:05.135 "reset": true, 00:21:05.135 "nvme_admin": false, 00:21:05.135 "nvme_io": false, 00:21:05.135 "nvme_io_md": false, 00:21:05.135 "write_zeroes": true, 00:21:05.135 "zcopy": false, 00:21:05.135 "get_zone_info": false, 00:21:05.135 "zone_management": false, 00:21:05.135 "zone_append": false, 00:21:05.135 "compare": false, 00:21:05.135 "compare_and_write": false, 00:21:05.135 "abort": false, 00:21:05.135 "seek_hole": true, 00:21:05.135 "seek_data": true, 00:21:05.135 "copy": false, 00:21:05.135 "nvme_iov_md": false 00:21:05.135 }, 00:21:05.135 "driver_specific": { 00:21:05.135 "lvol": { 00:21:05.135 "lvol_store_uuid": "55bf9d65-389e-4b66-bb39-7be2a7b11538", 00:21:05.135 "base_bdev": "nvme0n1", 00:21:05.135 "thin_provision": true, 00:21:05.135 "num_allocated_clusters": 0, 00:21:05.135 "snapshot": false, 00:21:05.135 "clone": false, 00:21:05.135 "esnap_clone": false 00:21:05.135 } 00:21:05.135 } 00:21:05.135 } 00:21:05.135 ]' 00:21:05.135 04:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:05.135 04:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:21:05.135 04:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:05.135 04:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:05.135 04:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:05.135 04:05:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:21:05.135 04:05:47 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:21:05.135 04:05:47 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:05.393 04:05:48 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:21:05.393 04:05:48 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:21:05.393 04:05:48 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:21:05.393 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:21:05.393 04:05:48 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 1c8baf76-a0e3-4177-afc8-ca37aaa8151e 00:21:05.393 04:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=1c8baf76-a0e3-4177-afc8-ca37aaa8151e 00:21:05.393 04:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:05.393 04:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:21:05.393 04:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:21:05.394 04:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1c8baf76-a0e3-4177-afc8-ca37aaa8151e 00:21:05.652 04:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:05.652 { 00:21:05.652 "name": "1c8baf76-a0e3-4177-afc8-ca37aaa8151e", 00:21:05.652 "aliases": [ 00:21:05.652 "lvs/nvme0n1p0" 00:21:05.652 ], 00:21:05.652 "product_name": "Logical Volume", 00:21:05.652 "block_size": 4096, 00:21:05.652 "num_blocks": 26476544, 00:21:05.652 "uuid": "1c8baf76-a0e3-4177-afc8-ca37aaa8151e", 00:21:05.652 "assigned_rate_limits": { 00:21:05.652 "rw_ios_per_sec": 0, 00:21:05.652 "rw_mbytes_per_sec": 0, 00:21:05.652 "r_mbytes_per_sec": 0, 00:21:05.652 "w_mbytes_per_sec": 0 00:21:05.652 }, 00:21:05.652 "claimed": false, 00:21:05.652 "zoned": false, 00:21:05.652 "supported_io_types": { 00:21:05.652 "read": true, 00:21:05.652 "write": true, 00:21:05.652 "unmap": true, 00:21:05.652 "flush": false, 00:21:05.652 "reset": true, 00:21:05.652 "nvme_admin": false, 00:21:05.652 "nvme_io": false, 00:21:05.652 "nvme_io_md": false, 00:21:05.652 "write_zeroes": true, 00:21:05.652 "zcopy": false, 00:21:05.652 "get_zone_info": false, 00:21:05.652 "zone_management": false, 00:21:05.652 "zone_append": false, 00:21:05.652 "compare": false, 00:21:05.652 "compare_and_write": false, 00:21:05.652 "abort": false, 00:21:05.652 "seek_hole": true, 00:21:05.652 "seek_data": true, 00:21:05.652 "copy": false, 00:21:05.652 "nvme_iov_md": false 00:21:05.652 }, 00:21:05.652 "driver_specific": { 00:21:05.652 "lvol": { 00:21:05.652 "lvol_store_uuid": "55bf9d65-389e-4b66-bb39-7be2a7b11538", 00:21:05.652 "base_bdev": "nvme0n1", 00:21:05.652 "thin_provision": true, 00:21:05.652 "num_allocated_clusters": 0, 00:21:05.652 "snapshot": false, 00:21:05.652 "clone": false, 00:21:05.652 "esnap_clone": false 00:21:05.652 } 00:21:05.652 } 00:21:05.652 } 00:21:05.652 ]' 00:21:05.652 04:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:05.652 04:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:21:05.652 04:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:05.652 04:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:05.652 04:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:05.652 04:05:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:21:05.652 04:05:48 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:21:05.652 04:05:48 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:21:05.652 04:05:48 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 1c8baf76-a0e3-4177-afc8-ca37aaa8151e -c nvc0n1p0 --l2p_dram_limit 60 00:21:05.933 [2024-12-07 04:05:48.506557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.933 [2024-12-07 04:05:48.506605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:05.933 [2024-12-07 04:05:48.506624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:05.933 
[2024-12-07 04:05:48.506634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.933 [2024-12-07 04:05:48.506711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.933 [2024-12-07 04:05:48.506726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:05.933 [2024-12-07 04:05:48.506741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:21:05.933 [2024-12-07 04:05:48.506752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.933 [2024-12-07 04:05:48.506814] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:05.933 [2024-12-07 04:05:48.507848] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:05.933 [2024-12-07 04:05:48.507887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.933 [2024-12-07 04:05:48.507898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:05.933 [2024-12-07 04:05:48.507913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.090 ms 00:21:05.933 [2024-12-07 04:05:48.507923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.933 [2024-12-07 04:05:48.508167] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 1cebcaf1-723d-4861-8c4b-c09a03be06f2 00:21:05.933 [2024-12-07 04:05:48.516439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.933 [2024-12-07 04:05:48.516571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:05.933 [2024-12-07 04:05:48.516619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:21:05.933 [2024-12-07 04:05:48.516664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.933 [2024-12-07 04:05:48.527576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.933 [2024-12-07 04:05:48.527976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:05.933 [2024-12-07 04:05:48.528038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.568 ms 00:21:05.933 [2024-12-07 04:05:48.528078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.933 [2024-12-07 04:05:48.528385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.933 [2024-12-07 04:05:48.528435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:05.933 [2024-12-07 04:05:48.528469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:21:05.933 [2024-12-07 04:05:48.528519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.933 [2024-12-07 04:05:48.528704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.933 [2024-12-07 04:05:48.528750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:05.933 [2024-12-07 04:05:48.528785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:05.933 [2024-12-07 04:05:48.528823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.933 [2024-12-07 04:05:48.528921] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:05.933 [2024-12-07 04:05:48.540208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.933 [2024-12-07 
04:05:48.540262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:05.933 [2024-12-07 04:05:48.540295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.315 ms 00:21:05.933 [2024-12-07 04:05:48.540321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.933 [2024-12-07 04:05:48.540417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.933 [2024-12-07 04:05:48.540440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:05.933 [2024-12-07 04:05:48.540466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:21:05.933 [2024-12-07 04:05:48.540486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.933 [2024-12-07 04:05:48.540619] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:05.933 [2024-12-07 04:05:48.540905] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:05.933 [2024-12-07 04:05:48.540994] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:05.933 [2024-12-07 04:05:48.541023] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:05.933 [2024-12-07 04:05:48.541054] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:05.933 [2024-12-07 04:05:48.541078] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:05.933 [2024-12-07 04:05:48.541109] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:05.933 [2024-12-07 04:05:48.541129] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:05.933 [2024-12-07 04:05:48.541156] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:05.933 [2024-12-07 04:05:48.541176] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:05.933 [2024-12-07 04:05:48.541203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.933 [2024-12-07 04:05:48.541228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:05.933 [2024-12-07 04:05:48.541255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:21:05.933 [2024-12-07 04:05:48.541275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.933 [2024-12-07 04:05:48.541490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.933 [2024-12-07 04:05:48.541520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:05.933 [2024-12-07 04:05:48.541544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.138 ms 00:21:05.933 [2024-12-07 04:05:48.541565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.933 [2024-12-07 04:05:48.541799] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:05.933 [2024-12-07 04:05:48.541823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:05.933 [2024-12-07 04:05:48.541854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:05.933 [2024-12-07 04:05:48.541876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.933 [2024-12-07 04:05:48.541901] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:21:05.933 [2024-12-07 04:05:48.541921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:05.933 [2024-12-07 04:05:48.541964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:05.933 [2024-12-07 04:05:48.541984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:05.933 [2024-12-07 04:05:48.542011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:05.933 [2024-12-07 04:05:48.542031] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:05.933 [2024-12-07 04:05:48.542054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:05.933 [2024-12-07 04:05:48.542073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:05.933 [2024-12-07 04:05:48.542100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:05.933 [2024-12-07 04:05:48.542120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:05.933 [2024-12-07 04:05:48.542157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:05.933 [2024-12-07 04:05:48.542177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.933 [2024-12-07 04:05:48.542204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:05.933 [2024-12-07 04:05:48.542222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:05.933 [2024-12-07 04:05:48.542247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.933 [2024-12-07 04:05:48.542266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:05.933 [2024-12-07 04:05:48.542289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:05.933 [2024-12-07 04:05:48.542308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:05.933 [2024-12-07 04:05:48.542330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:05.933 [2024-12-07 04:05:48.542350] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:05.933 [2024-12-07 04:05:48.542373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:05.933 [2024-12-07 04:05:48.542392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:05.933 [2024-12-07 04:05:48.542416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:05.933 [2024-12-07 04:05:48.542435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:05.933 [2024-12-07 04:05:48.542458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:05.933 [2024-12-07 04:05:48.542477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:05.933 [2024-12-07 04:05:48.542499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:05.933 [2024-12-07 04:05:48.542518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:05.933 [2024-12-07 04:05:48.542546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:05.933 [2024-12-07 04:05:48.542593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:05.933 [2024-12-07 04:05:48.542617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:05.933 [2024-12-07 04:05:48.542636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:05.933 [2024-12-07 04:05:48.542659] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:05.933 [2024-12-07 04:05:48.542678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:05.933 [2024-12-07 04:05:48.542700] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:05.933 [2024-12-07 04:05:48.542720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.933 [2024-12-07 04:05:48.542743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:05.933 [2024-12-07 04:05:48.542762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:05.933 [2024-12-07 04:05:48.542785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.933 [2024-12-07 04:05:48.542803] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:05.933 [2024-12-07 04:05:48.542830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:05.933 [2024-12-07 04:05:48.542850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:05.933 [2024-12-07 04:05:48.542874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:05.933 [2024-12-07 04:05:48.542894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:05.933 [2024-12-07 04:05:48.542923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:05.933 [2024-12-07 04:05:48.542957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:05.933 [2024-12-07 04:05:48.542981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:05.933 [2024-12-07 04:05:48.542999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:05.933 [2024-12-07 04:05:48.543023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:05.933 [2024-12-07 04:05:48.543045] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:05.933 [2024-12-07 04:05:48.543074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:05.933 [2024-12-07 04:05:48.543097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:05.933 [2024-12-07 04:05:48.543124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:05.933 [2024-12-07 04:05:48.543146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:05.933 [2024-12-07 04:05:48.543171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:05.933 [2024-12-07 04:05:48.543193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:05.933 [2024-12-07 04:05:48.543221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:05.933 [2024-12-07 04:05:48.543242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:05.933 [2024-12-07 04:05:48.543268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:21:05.933 [2024-12-07 04:05:48.543289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:05.933 [2024-12-07 04:05:48.543318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:05.933 [2024-12-07 04:05:48.543338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:05.933 [2024-12-07 04:05:48.543363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:05.933 [2024-12-07 04:05:48.543383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:05.933 [2024-12-07 04:05:48.543409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:05.933 [2024-12-07 04:05:48.543431] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:05.934 [2024-12-07 04:05:48.543464] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:05.934 [2024-12-07 04:05:48.543491] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:05.934 [2024-12-07 04:05:48.543518] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:05.934 [2024-12-07 04:05:48.543539] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:05.934 [2024-12-07 04:05:48.543565] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:05.934 [2024-12-07 04:05:48.543586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:05.934 [2024-12-07 04:05:48.543612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:05.934 [2024-12-07 04:05:48.543633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.908 ms 00:21:05.934 [2024-12-07 04:05:48.543657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:05.934 [2024-12-07 04:05:48.543824] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
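The layout dump above can be sanity-checked with shell arithmetic, assuming one L2P entry per 4 KiB logical block (the usual FTL mapping granularity; an assumption, since the dump does not state it directly):

    # 20971520 entries * 4 B per address = the 80.00 MiB "Region l2p" above
    echo $(( 20971520 * 4 / 1024 / 1024 ))       # -> 80
    # 20971520 entries * 4096 B per block = the user-visible capacity
    echo $(( 20971520 * 4096 / 1024 / 1024 ))    # -> 81920 MiB (80 GiB)

The --l2p_dram_limit 60 passed to bdev_ftl_create caps how much of that 80 MiB table stays resident in DRAM, which is what the later "l2p maximum resident size is: 59 (of 60) MiB" notice reports.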
00:21:05.934 [2024-12-07 04:05:48.543873] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:11.192 [2024-12-07 04:05:52.984768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.192 [2024-12-07 04:05:52.985084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:11.192 [2024-12-07 04:05:52.985111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4448.151 ms 00:21:11.192 [2024-12-07 04:05:52.985126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.192 [2024-12-07 04:05:53.022523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.192 [2024-12-07 04:05:53.022577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:11.192 [2024-12-07 04:05:53.022594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.155 ms 00:21:11.192 [2024-12-07 04:05:53.022607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.192 [2024-12-07 04:05:53.022785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.192 [2024-12-07 04:05:53.022803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:11.192 [2024-12-07 04:05:53.022815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:21:11.192 [2024-12-07 04:05:53.022831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.192 [2024-12-07 04:05:53.078165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.192 [2024-12-07 04:05:53.078230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:11.192 [2024-12-07 04:05:53.078250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.352 ms 00:21:11.192 [2024-12-07 04:05:53.078264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.192 [2024-12-07 04:05:53.078323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.192 [2024-12-07 04:05:53.078338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:11.192 [2024-12-07 04:05:53.078351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:11.192 [2024-12-07 04:05:53.078363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.192 [2024-12-07 04:05:53.078870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.192 [2024-12-07 04:05:53.078889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:11.192 [2024-12-07 04:05:53.078900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:21:11.192 [2024-12-07 04:05:53.078917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.192 [2024-12-07 04:05:53.079074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.192 [2024-12-07 04:05:53.079093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:11.192 [2024-12-07 04:05:53.079105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:21:11.192 [2024-12-07 04:05:53.079122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.192 [2024-12-07 04:05:53.100692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.192 [2024-12-07 04:05:53.100734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:11.192 [2024-12-07 
04:05:53.100748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.553 ms 00:21:11.192 [2024-12-07 04:05:53.100762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.192 [2024-12-07 04:05:53.113574] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:11.193 [2024-12-07 04:05:53.130360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.193 [2024-12-07 04:05:53.130583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:11.193 [2024-12-07 04:05:53.130618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.475 ms 00:21:11.193 [2024-12-07 04:05:53.130629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.193 [2024-12-07 04:05:53.228945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.193 [2024-12-07 04:05:53.229318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:11.193 [2024-12-07 04:05:53.229437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.391 ms 00:21:11.193 [2024-12-07 04:05:53.229499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.193 [2024-12-07 04:05:53.229780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.193 [2024-12-07 04:05:53.229981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:11.193 [2024-12-07 04:05:53.230120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:21:11.193 [2024-12-07 04:05:53.230304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.193 [2024-12-07 04:05:53.265634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.193 [2024-12-07 04:05:53.265905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:11.193 [2024-12-07 04:05:53.266042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.242 ms 00:21:11.193 [2024-12-07 04:05:53.266102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.193 [2024-12-07 04:05:53.301448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.193 [2024-12-07 04:05:53.301657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:11.193 [2024-12-07 04:05:53.301786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.282 ms 00:21:11.193 [2024-12-07 04:05:53.301844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.193 [2024-12-07 04:05:53.302653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.193 [2024-12-07 04:05:53.302856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:11.193 [2024-12-07 04:05:53.302992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:21:11.193 [2024-12-07 04:05:53.303193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.193 [2024-12-07 04:05:53.426045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.193 [2024-12-07 04:05:53.426303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:11.193 [2024-12-07 04:05:53.426486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 122.806 ms 00:21:11.193 [2024-12-07 04:05:53.426586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.193 [2024-12-07 
04:05:53.464730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.193 [2024-12-07 04:05:53.464868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:11.193 [2024-12-07 04:05:53.465084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.922 ms 00:21:11.193 [2024-12-07 04:05:53.465182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.193 [2024-12-07 04:05:53.501809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.193 [2024-12-07 04:05:53.502092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:11.193 [2024-12-07 04:05:53.502127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.588 ms 00:21:11.193 [2024-12-07 04:05:53.502139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.193 [2024-12-07 04:05:53.538365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.193 [2024-12-07 04:05:53.538403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:11.193 [2024-12-07 04:05:53.538420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.217 ms 00:21:11.193 [2024-12-07 04:05:53.538430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.193 [2024-12-07 04:05:53.538490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.193 [2024-12-07 04:05:53.538502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:11.193 [2024-12-07 04:05:53.538521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:11.193 [2024-12-07 04:05:53.538536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.193 [2024-12-07 04:05:53.538702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.193 [2024-12-07 04:05:53.538719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:11.193 [2024-12-07 04:05:53.538733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:11.193 [2024-12-07 04:05:53.538748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.193 [2024-12-07 04:05:53.540029] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 5041.134 ms, result 0 00:21:11.193 { 00:21:11.193 "name": "ftl0", 00:21:11.193 "uuid": "1cebcaf1-723d-4861-8c4b-c09a03be06f2" 00:21:11.193 } 00:21:11.193 04:05:53 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:21:11.193 04:05:53 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:21:11.193 04:05:53 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:11.193 04:05:53 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:21:11.193 04:05:53 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:11.193 04:05:53 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:11.193 04:05:53 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:11.193 04:05:53 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:21:11.452 [ 00:21:11.452 { 00:21:11.452 "name": "ftl0", 00:21:11.452 "aliases": [ 00:21:11.452 "1cebcaf1-723d-4861-8c4b-c09a03be06f2" 00:21:11.452 ], 00:21:11.452 "product_name": "FTL 
disk", 00:21:11.452 "block_size": 4096, 00:21:11.452 "num_blocks": 20971520, 00:21:11.452 "uuid": "1cebcaf1-723d-4861-8c4b-c09a03be06f2", 00:21:11.452 "assigned_rate_limits": { 00:21:11.452 "rw_ios_per_sec": 0, 00:21:11.452 "rw_mbytes_per_sec": 0, 00:21:11.452 "r_mbytes_per_sec": 0, 00:21:11.452 "w_mbytes_per_sec": 0 00:21:11.452 }, 00:21:11.452 "claimed": false, 00:21:11.452 "zoned": false, 00:21:11.452 "supported_io_types": { 00:21:11.452 "read": true, 00:21:11.452 "write": true, 00:21:11.452 "unmap": true, 00:21:11.452 "flush": true, 00:21:11.452 "reset": false, 00:21:11.452 "nvme_admin": false, 00:21:11.452 "nvme_io": false, 00:21:11.452 "nvme_io_md": false, 00:21:11.452 "write_zeroes": true, 00:21:11.452 "zcopy": false, 00:21:11.452 "get_zone_info": false, 00:21:11.452 "zone_management": false, 00:21:11.452 "zone_append": false, 00:21:11.452 "compare": false, 00:21:11.452 "compare_and_write": false, 00:21:11.452 "abort": false, 00:21:11.452 "seek_hole": false, 00:21:11.452 "seek_data": false, 00:21:11.452 "copy": false, 00:21:11.452 "nvme_iov_md": false 00:21:11.452 }, 00:21:11.452 "driver_specific": { 00:21:11.452 "ftl": { 00:21:11.452 "base_bdev": "1c8baf76-a0e3-4177-afc8-ca37aaa8151e", 00:21:11.452 "cache": "nvc0n1p0" 00:21:11.452 } 00:21:11.452 } 00:21:11.452 } 00:21:11.452 ] 00:21:11.452 04:05:53 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:21:11.452 04:05:53 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:21:11.452 04:05:53 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:11.452 04:05:54 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:21:11.452 04:05:54 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:11.711 [2024-12-07 04:05:54.361063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.711 [2024-12-07 04:05:54.361117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:11.711 [2024-12-07 04:05:54.361133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:11.711 [2024-12-07 04:05:54.361164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.711 [2024-12-07 04:05:54.361217] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:11.711 [2024-12-07 04:05:54.365362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.711 [2024-12-07 04:05:54.365535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:11.711 [2024-12-07 04:05:54.365563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.127 ms 00:21:11.711 [2024-12-07 04:05:54.365576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.711 [2024-12-07 04:05:54.366498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.711 [2024-12-07 04:05:54.366525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:11.711 [2024-12-07 04:05:54.366542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.844 ms 00:21:11.711 [2024-12-07 04:05:54.366554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.711 [2024-12-07 04:05:54.369140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.711 [2024-12-07 04:05:54.369165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:11.711 
[2024-12-07 04:05:54.369179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.544 ms 00:21:11.711 [2024-12-07 04:05:54.369190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.711 [2024-12-07 04:05:54.374220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.711 [2024-12-07 04:05:54.374251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:11.711 [2024-12-07 04:05:54.374266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.989 ms 00:21:11.711 [2024-12-07 04:05:54.374292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.711 [2024-12-07 04:05:54.410208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.711 [2024-12-07 04:05:54.410247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:11.711 [2024-12-07 04:05:54.410282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.829 ms 00:21:11.711 [2024-12-07 04:05:54.410292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.711 [2024-12-07 04:05:54.432124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.711 [2024-12-07 04:05:54.432162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:11.711 [2024-12-07 04:05:54.432181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.800 ms 00:21:11.711 [2024-12-07 04:05:54.432208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.711 [2024-12-07 04:05:54.432491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.711 [2024-12-07 04:05:54.432513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:11.711 [2024-12-07 04:05:54.432526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:21:11.711 [2024-12-07 04:05:54.432536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.971 [2024-12-07 04:05:54.468510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.971 [2024-12-07 04:05:54.468546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:11.971 [2024-12-07 04:05:54.468564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.977 ms 00:21:11.971 [2024-12-07 04:05:54.468591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.971 [2024-12-07 04:05:54.504899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.971 [2024-12-07 04:05:54.505056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:11.971 [2024-12-07 04:05:54.505082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.295 ms 00:21:11.971 [2024-12-07 04:05:54.505092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.971 [2024-12-07 04:05:54.540077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.971 [2024-12-07 04:05:54.540113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:11.971 [2024-12-07 04:05:54.540129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.967 ms 00:21:11.971 [2024-12-07 04:05:54.540155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.971 [2024-12-07 04:05:54.575342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.971 [2024-12-07 04:05:54.575492] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:11.971 [2024-12-07 04:05:54.575517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.073 ms 00:21:11.971 [2024-12-07 04:05:54.575528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.971 [2024-12-07 04:05:54.575616] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:11.971 [2024-12-07 04:05:54.575650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.575665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.575677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.575690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.575701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.575714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.575725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.575742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.575752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.575765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.575776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.575789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.575801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.575813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.575824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.575837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.575847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.575860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.575871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.575884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.575894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.575910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 
[2024-12-07 04:05:54.575921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.575953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.575965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.575978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.575989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.576003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.576014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.576028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.576039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.576055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.576066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.576079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.576090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.576103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.576114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.576128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.576139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.576154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.576165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:11.971 [2024-12-07 04:05:54.576178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:21:11.972 [2024-12-07 04:05:54.576257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:11.972 [2024-12-07 04:05:54.576918] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:11.972 [2024-12-07 04:05:54.576940] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 1cebcaf1-723d-4861-8c4b-c09a03be06f2 00:21:11.972 [2024-12-07 04:05:54.576951] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:11.972 [2024-12-07 04:05:54.576966] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:11.972 [2024-12-07 04:05:54.576976] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:11.972 [2024-12-07 04:05:54.576992] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:11.972 [2024-12-07 04:05:54.577002] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:11.972 [2024-12-07 04:05:54.577015] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:11.972 [2024-12-07 04:05:54.577025] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:11.972 [2024-12-07 04:05:54.577037] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:11.972 [2024-12-07 04:05:54.577046] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:11.972 [2024-12-07 04:05:54.577059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.972 [2024-12-07 04:05:54.577069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:11.972 [2024-12-07 04:05:54.577083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.465 ms 00:21:11.972 [2024-12-07 04:05:54.577093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.972 [2024-12-07 04:05:54.596632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.972 [2024-12-07 04:05:54.596668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:11.972 [2024-12-07 04:05:54.596683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.482 ms 00:21:11.972 [2024-12-07 04:05:54.596694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.972 [2024-12-07 04:05:54.597255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.972 [2024-12-07 04:05:54.597279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:11.972 [2024-12-07 04:05:54.597293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.504 ms 00:21:11.972 [2024-12-07 04:05:54.597304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.972 [2024-12-07 04:05:54.666759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.972 [2024-12-07 04:05:54.666795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:11.972 [2024-12-07 04:05:54.666812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.972 [2024-12-07 04:05:54.666823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
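[Editorial note] The NOTICE quadruples throughout this startup and unload sequence all come from trace_step() in mngt/ftl_mngt.c: the step type (Action on the forward path, Rollback while tearing down), the step name, its duration, and its status. The whole lifecycle this log records is driven over SPDK's JSON-RPC socket; below is a minimal sketch of the round trip that produces an 'FTL startup' and 'FTL shutdown' trace like the ones here, assuming a running SPDK target on the default RPC socket. The cache bdev name is taken from the bdev_get_bdevs output earlier in this log; the base bdev name is a placeholder.

    # Create the FTL bdev; this kicks off the 'FTL startup' management process traced above.
    scripts/rpc.py bdev_ftl_create -b ftl0 -d <base_bdev> -c nvc0n1p0
    # ... run I/O against ftl0 ...
    # Tear it down; this runs 'FTL shutdown', including the Rollback steps seen here.
    scripts/rpc.py bdev_ftl_unload -b ftl0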
00:21:11.972 [2024-12-07 04:05:54.666901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.972 [2024-12-07 04:05:54.666913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:11.972 [2024-12-07 04:05:54.666939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.972 [2024-12-07 04:05:54.666950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.972 [2024-12-07 04:05:54.667106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.972 [2024-12-07 04:05:54.667124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:11.972 [2024-12-07 04:05:54.667137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.972 [2024-12-07 04:05:54.667148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.972 [2024-12-07 04:05:54.667200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:11.972 [2024-12-07 04:05:54.667211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:11.972 [2024-12-07 04:05:54.667224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:11.972 [2024-12-07 04:05:54.667235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.232 [2024-12-07 04:05:54.796059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.232 [2024-12-07 04:05:54.796113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:12.232 [2024-12-07 04:05:54.796130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.232 [2024-12-07 04:05:54.796157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.232 [2024-12-07 04:05:54.893467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.232 [2024-12-07 04:05:54.893521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:12.232 [2024-12-07 04:05:54.893538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.232 [2024-12-07 04:05:54.893549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.232 [2024-12-07 04:05:54.893692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.232 [2024-12-07 04:05:54.893705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:12.232 [2024-12-07 04:05:54.893723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.232 [2024-12-07 04:05:54.893733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.232 [2024-12-07 04:05:54.893852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.232 [2024-12-07 04:05:54.893864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:12.232 [2024-12-07 04:05:54.893878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.232 [2024-12-07 04:05:54.893887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.232 [2024-12-07 04:05:54.894086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.232 [2024-12-07 04:05:54.894101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:12.232 [2024-12-07 04:05:54.894115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.232 [2024-12-07 
04:05:54.894128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.232 [2024-12-07 04:05:54.894236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.232 [2024-12-07 04:05:54.894249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:12.232 [2024-12-07 04:05:54.894262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.232 [2024-12-07 04:05:54.894273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.232 [2024-12-07 04:05:54.894334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.232 [2024-12-07 04:05:54.894346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:12.232 [2024-12-07 04:05:54.894359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.232 [2024-12-07 04:05:54.894371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.232 [2024-12-07 04:05:54.894455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:12.232 [2024-12-07 04:05:54.894467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:12.232 [2024-12-07 04:05:54.894480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:12.232 [2024-12-07 04:05:54.894490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:12.232 [2024-12-07 04:05:54.894721] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 534.507 ms, result 0 00:21:12.232 true 00:21:12.232 04:05:54 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 76779 00:21:12.232 04:05:54 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 76779 ']' 00:21:12.232 04:05:54 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 76779 00:21:12.232 04:05:54 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:21:12.232 04:05:54 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:12.232 04:05:54 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76779 00:21:12.491 killing process with pid 76779 00:21:12.491 04:05:54 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:12.491 04:05:54 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:12.491 04:05:54 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76779' 00:21:12.491 04:05:54 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 76779 00:21:12.491 04:05:54 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 76779 00:21:17.760 04:05:59 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:21:17.760 04:05:59 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:21:17.760 04:05:59 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:21:17.760 04:05:59 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:17.760 04:05:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:17.760 04:05:59 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:21:17.760 04:05:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:21:17.760 04:05:59 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:17.760 04:05:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:17.760 04:05:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:17.760 04:05:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:17.760 04:05:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:21:17.760 04:05:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:17.760 04:05:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:17.760 04:05:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:17.760 04:05:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:21:17.760 04:05:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:17.760 04:05:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:17.760 04:05:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:17.760 04:05:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:21:17.760 04:05:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:17.760 04:05:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:21:17.760 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:21:17.760 fio-3.35 00:21:17.760 Starting 1 thread 00:21:23.157 00:21:23.157 test: (groupid=0, jobs=1): err= 0: pid=77004: Sat Dec 7 04:06:05 2024 00:21:23.157 read: IOPS=864, BW=57.4MiB/s (60.2MB/s)(255MiB/4434msec) 00:21:23.157 slat (usec): min=4, max=137, avg= 9.46, stdev= 4.26 00:21:23.157 clat (usec): min=319, max=3126, avg=519.16, stdev=77.07 00:21:23.157 lat (usec): min=325, max=3136, avg=528.62, stdev=78.21 00:21:23.157 clat percentiles (usec): 00:21:23.157 | 1.00th=[ 379], 5.00th=[ 400], 10.00th=[ 441], 20.00th=[ 469], 00:21:23.157 | 30.00th=[ 486], 40.00th=[ 502], 50.00th=[ 519], 60.00th=[ 537], 00:21:23.157 | 70.00th=[ 562], 80.00th=[ 570], 90.00th=[ 586], 95.00th=[ 603], 00:21:23.157 | 99.00th=[ 685], 99.50th=[ 734], 99.90th=[ 865], 99.95th=[ 1057], 00:21:23.157 | 99.99th=[ 3130] 00:21:23.157 write: IOPS=870, BW=57.8MiB/s (60.6MB/s)(256MiB/4429msec); 0 zone resets 00:21:23.157 slat (usec): min=15, max=149, avg=28.63, stdev= 8.54 00:21:23.157 clat (usec): min=383, max=1598, avg=583.02, stdev=71.48 00:21:23.157 lat (usec): min=400, max=1647, avg=611.66, stdev=73.86 00:21:23.157 clat percentiles (usec): 00:21:23.157 | 1.00th=[ 416], 5.00th=[ 474], 10.00th=[ 494], 20.00th=[ 529], 00:21:23.157 | 30.00th=[ 553], 40.00th=[ 578], 50.00th=[ 586], 60.00th=[ 594], 00:21:23.157 | 70.00th=[ 603], 80.00th=[ 627], 90.00th=[ 660], 95.00th=[ 676], 00:21:23.157 | 99.00th=[ 848], 99.50th=[ 889], 99.90th=[ 971], 99.95th=[ 1106], 00:21:23.157 | 99.99th=[ 1598] 00:21:23.157 bw ( KiB/s): min=56984, max=65552, per=100.00%, avg=59500.00, stdev=3204.36, samples=8 00:21:23.157 iops : min= 838, max= 964, avg=875.00, stdev=47.12, samples=8 00:21:23.157 lat (usec) : 500=25.41%, 750=73.53%, 1000=0.99% 00:21:23.157 lat (msec) : 
2=0.05%, 4=0.01% 00:21:23.157 cpu : usr=98.62%, sys=0.25%, ctx=8, majf=0, minf=1169 00:21:23.157 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:23.157 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.158 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:23.158 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:23.158 latency : target=0, window=0, percentile=100.00%, depth=1 00:21:23.158 00:21:23.158 Run status group 0 (all jobs): 00:21:23.158 READ: bw=57.4MiB/s (60.2MB/s), 57.4MiB/s-57.4MiB/s (60.2MB/s-60.2MB/s), io=255MiB (267MB), run=4434-4434msec 00:21:23.158 WRITE: bw=57.8MiB/s (60.6MB/s), 57.8MiB/s-57.8MiB/s (60.6MB/s-60.6MB/s), io=256MiB (269MB), run=4429-4429msec 00:21:25.061 ----------------------------------------------------- 00:21:25.061 Suppressions used: 00:21:25.061 count bytes template 00:21:25.061 1 5 /usr/src/fio/parse.c 00:21:25.061 1 8 libtcmalloc_minimal.so 00:21:25.061 1 904 libcrypto.so 00:21:25.061 ----------------------------------------------------- 00:21:25.061 00:21:25.061 04:06:07 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:21:25.061 04:06:07 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:25.061 04:06:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:25.061 04:06:07 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:21:25.061 04:06:07 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:21:25.061 04:06:07 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:25.061 04:06:07 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:25.061 04:06:07 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:21:25.061 04:06:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:21:25.061 04:06:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:25.061 04:06:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:25.061 04:06:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:25.061 04:06:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:25.061 04:06:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:21:25.061 04:06:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:25.061 04:06:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:25.061 04:06:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:25.061 04:06:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:25.320 04:06:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:21:25.320 04:06:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:25.320 04:06:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:25.320 04:06:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:21:25.320 04:06:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:25.320 04:06:07 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:21:25.579 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:21:25.580 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:21:25.580 fio-3.35 00:21:25.580 Starting 2 threads 00:21:57.682 00:21:57.682 first_half: (groupid=0, jobs=1): err= 0: pid=77117: Sat Dec 7 04:06:39 2024 00:21:57.682 read: IOPS=2164, BW=8657KiB/s (8865kB/s)(255MiB/30144msec) 00:21:57.683 slat (nsec): min=3335, max=59718, avg=8472.12, stdev=3165.17 00:21:57.683 clat (usec): min=1083, max=310253, avg=46090.44, stdev=25877.67 00:21:57.683 lat (usec): min=1089, max=310258, avg=46098.91, stdev=25877.71 00:21:57.683 clat percentiles (msec): 00:21:57.683 | 1.00th=[ 8], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 37], 00:21:57.683 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 42], 00:21:57.683 | 70.00th=[ 43], 80.00th=[ 44], 90.00th=[ 51], 95.00th=[ 64], 00:21:57.683 | 99.00th=[ 203], 99.50th=[ 224], 99.90th=[ 257], 99.95th=[ 266], 00:21:57.683 | 99.99th=[ 300] 00:21:57.683 write: IOPS=2590, BW=10.1MiB/s (10.6MB/s)(256MiB/25298msec); 0 zone resets 00:21:57.683 slat (usec): min=4, max=901, avg= 9.96, stdev=11.94 00:21:57.683 clat (usec): min=551, max=99784, avg=12948.36, stdev=21134.30 00:21:57.683 lat (usec): min=563, max=99793, avg=12958.32, stdev=21134.63 00:21:57.683 clat percentiles (usec): 00:21:57.683 | 1.00th=[ 1205], 5.00th=[ 1582], 10.00th=[ 1795], 20.00th=[ 2147], 00:21:57.683 | 30.00th=[ 3916], 40.00th=[ 6259], 50.00th=[ 7767], 60.00th=[ 8979], 00:21:57.683 | 70.00th=[10159], 80.00th=[11863], 90.00th=[14353], 95.00th=[84411], 00:21:57.683 | 99.00th=[89654], 99.50th=[91751], 99.90th=[94897], 99.95th=[95945], 00:21:57.683 | 99.99th=[98042] 00:21:57.683 bw ( KiB/s): min= 952, max=41016, per=100.00%, avg=20968.28, stdev=12706.32, samples=25 00:21:57.683 iops : min= 238, max=10254, avg=5242.04, stdev=3176.53, samples=25 00:21:57.683 lat (usec) : 750=0.02%, 1000=0.08% 00:21:57.683 lat (msec) : 2=8.35%, 4=7.03%, 10=19.82%, 20=11.38%, 50=44.52% 00:21:57.683 lat (msec) : 100=7.10%, 250=1.63%, 500=0.07% 00:21:57.683 cpu : usr=99.23%, sys=0.17%, ctx=41, majf=0, minf=5569 00:21:57.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:57.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.683 complete : 0=0.0%, 4=99.9%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:57.683 issued rwts: total=65240,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:57.683 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:57.683 second_half: (groupid=0, jobs=1): err= 0: pid=77118: Sat Dec 7 04:06:39 2024 00:21:57.683 read: IOPS=2155, BW=8620KiB/s (8827kB/s)(255MiB/30278msec) 00:21:57.683 slat (usec): min=3, max=119, avg= 8.55, stdev= 2.87 00:21:57.683 clat (usec): min=1132, max=315289, avg=45256.23, stdev=26733.84 00:21:57.683 lat (usec): min=1145, max=315301, avg=45264.78, stdev=26734.03 00:21:57.683 clat percentiles (msec): 00:21:57.683 | 1.00th=[ 8], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 37], 00:21:57.683 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 42], 00:21:57.683 | 70.00th=[ 43], 80.00th=[ 44], 90.00th=[ 50], 95.00th=[ 54], 00:21:57.683 | 99.00th=[ 203], 
99.50th=[ 222], 99.90th=[ 257], 99.95th=[ 271], 00:21:57.683 | 99.99th=[ 309] 00:21:57.683 write: IOPS=2701, BW=10.6MiB/s (11.1MB/s)(256MiB/24260msec); 0 zone resets 00:21:57.683 slat (usec): min=4, max=680, avg= 9.91, stdev= 7.37 00:21:57.683 clat (usec): min=541, max=99318, avg=14048.79, stdev=22087.62 00:21:57.683 lat (usec): min=554, max=99334, avg=14058.70, stdev=22088.02 00:21:57.683 clat percentiles (usec): 00:21:57.683 | 1.00th=[ 1172], 5.00th=[ 1532], 10.00th=[ 1795], 20.00th=[ 2180], 00:21:57.683 | 30.00th=[ 3851], 40.00th=[ 6259], 50.00th=[ 7898], 60.00th=[ 9110], 00:21:57.683 | 70.00th=[10421], 80.00th=[12256], 90.00th=[38011], 95.00th=[84411], 00:21:57.683 | 99.00th=[90702], 99.50th=[92799], 99.90th=[95945], 99.95th=[96994], 00:21:57.683 | 99.99th=[99091] 00:21:57.683 bw ( KiB/s): min= 880, max=43136, per=97.30%, avg=20164.92, stdev=11834.21, samples=26 00:21:57.683 iops : min= 220, max=10784, avg=5041.23, stdev=2958.55, samples=26 00:21:57.683 lat (usec) : 750=0.01%, 1000=0.12% 00:21:57.683 lat (msec) : 2=7.92%, 4=7.47%, 10=19.91%, 20=10.84%, 50=45.10% 00:21:57.683 lat (msec) : 100=6.90%, 250=1.68%, 500=0.06% 00:21:57.683 cpu : usr=99.21%, sys=0.16%, ctx=59, majf=0, minf=5548 00:21:57.683 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:57.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:57.683 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:57.683 issued rwts: total=65252,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:57.683 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:57.683 00:21:57.683 Run status group 0 (all jobs): 00:21:57.683 READ: bw=16.8MiB/s (17.7MB/s), 8620KiB/s-8657KiB/s (8827kB/s-8865kB/s), io=510MiB (534MB), run=30144-30278msec 00:21:57.683 WRITE: bw=20.2MiB/s (21.2MB/s), 10.1MiB/s-10.6MiB/s (10.6MB/s-11.1MB/s), io=512MiB (537MB), run=24260-25298msec 00:21:59.597 ----------------------------------------------------- 00:21:59.597 Suppressions used: 00:21:59.597 count bytes template 00:21:59.597 2 10 /usr/src/fio/parse.c 00:21:59.597 2 192 /usr/src/fio/iolog.c 00:21:59.597 1 8 libtcmalloc_minimal.so 00:21:59.597 1 904 libcrypto.so 00:21:59.597 ----------------------------------------------------- 00:21:59.597 00:21:59.597 04:06:42 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:21:59.597 04:06:42 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:59.597 04:06:42 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:59.597 04:06:42 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:21:59.597 04:06:42 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:21:59.597 04:06:42 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:59.597 04:06:42 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:59.597 04:06:42 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:21:59.597 04:06:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:21:59.597 04:06:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:59.597 04:06:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:59.597 04:06:42 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1343 -- # local sanitizers 00:21:59.597 04:06:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:59.597 04:06:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:21:59.597 04:06:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:59.597 04:06:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:59.597 04:06:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:59.597 04:06:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:21:59.597 04:06:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:59.597 04:06:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:59.597 04:06:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:59.597 04:06:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:21:59.598 04:06:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:59.598 04:06:42 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:21:59.856 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:21:59.856 fio-3.35 00:21:59.856 Starting 1 thread 00:22:17.954 00:22:17.954 test: (groupid=0, jobs=1): err= 0: pid=77493: Sat Dec 7 04:07:00 2024 00:22:17.954 read: IOPS=6232, BW=24.3MiB/s (25.5MB/s)(255MiB/10461msec) 00:22:17.954 slat (nsec): min=3224, max=43362, avg=8020.77, stdev=3645.85 00:22:17.954 clat (usec): min=674, max=45857, avg=20524.86, stdev=1291.54 00:22:17.954 lat (usec): min=686, max=45882, avg=20532.88, stdev=1291.48 00:22:17.954 clat percentiles (usec): 00:22:17.954 | 1.00th=[19268], 5.00th=[19530], 10.00th=[19792], 20.00th=[20055], 00:22:17.954 | 30.00th=[20055], 40.00th=[20317], 50.00th=[20317], 60.00th=[20579], 00:22:17.954 | 70.00th=[20579], 80.00th=[20841], 90.00th=[21103], 95.00th=[21627], 00:22:17.954 | 99.00th=[25035], 99.50th=[29492], 99.90th=[32900], 99.95th=[38011], 00:22:17.954 | 99.99th=[44303] 00:22:17.954 write: IOPS=10.6k, BW=41.3MiB/s (43.3MB/s)(256MiB/6195msec); 0 zone resets 00:22:17.954 slat (usec): min=4, max=813, avg= 8.84, stdev=10.03 00:22:17.954 clat (usec): min=708, max=66950, avg=12044.04, stdev=14569.74 00:22:17.954 lat (usec): min=715, max=66958, avg=12052.87, stdev=14569.79 00:22:17.954 clat percentiles (usec): 00:22:17.954 | 1.00th=[ 1237], 5.00th=[ 1483], 10.00th=[ 1680], 20.00th=[ 1909], 00:22:17.954 | 30.00th=[ 2114], 40.00th=[ 2507], 50.00th=[ 8029], 60.00th=[ 9634], 00:22:17.954 | 70.00th=[10683], 80.00th=[12780], 90.00th=[43779], 95.00th=[45351], 00:22:17.954 | 99.00th=[47449], 99.50th=[48497], 99.90th=[51119], 99.95th=[53216], 00:22:17.954 | 99.99th=[62129] 00:22:17.954 bw ( KiB/s): min=13992, max=55808, per=95.31%, avg=40329.85, stdev=10189.35, samples=13 00:22:17.954 iops : min= 3498, max=13952, avg=10082.46, stdev=2547.34, samples=13 00:22:17.954 lat (usec) : 750=0.01%, 1000=0.04% 00:22:17.954 lat (msec) : 2=12.37%, 4=8.50%, 10=10.97%, 20=20.92%, 50=47.12% 00:22:17.954 lat (msec) : 100=0.08% 00:22:17.954 cpu : usr=98.93%, sys=0.29%, ctx=34, majf=0, minf=5565 00:22:17.954 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:17.954 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.954 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:17.954 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.954 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:17.954 00:22:17.954 Run status group 0 (all jobs): 00:22:17.954 READ: bw=24.3MiB/s (25.5MB/s), 24.3MiB/s-24.3MiB/s (25.5MB/s-25.5MB/s), io=255MiB (267MB), run=10461-10461msec 00:22:17.954 WRITE: bw=41.3MiB/s (43.3MB/s), 41.3MiB/s-41.3MiB/s (43.3MB/s-43.3MB/s), io=256MiB (268MB), run=6195-6195msec 00:22:19.913 ----------------------------------------------------- 00:22:19.913 Suppressions used: 00:22:19.913 count bytes template 00:22:19.913 1 5 /usr/src/fio/parse.c 00:22:19.913 2 192 /usr/src/fio/iolog.c 00:22:19.913 1 8 libtcmalloc_minimal.so 00:22:19.913 1 904 libcrypto.so 00:22:19.913 ----------------------------------------------------- 00:22:19.913 00:22:19.913 04:07:02 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:22:19.913 04:07:02 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:19.913 04:07:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:19.913 04:07:02 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:19.913 Remove shared memory files 00:22:19.913 04:07:02 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:22:19.913 04:07:02 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:19.913 04:07:02 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:22:19.913 04:07:02 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:22:19.913 04:07:02 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57733 /dev/shm/spdk_tgt_trace.pid75673 00:22:19.913 04:07:02 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:19.913 04:07:02 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:22:19.913 ************************************ 00:22:19.913 END TEST ftl_fio_basic 00:22:19.913 ************************************ 00:22:19.913 00:22:19.913 real 1m18.156s 00:22:19.913 user 2m53.322s 00:22:19.913 sys 0m4.034s 00:22:19.913 04:07:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:19.913 04:07:02 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:22:19.913 04:07:02 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:22:19.913 04:07:02 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:19.913 04:07:02 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:19.913 04:07:02 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:19.913 ************************************ 00:22:19.913 START TEST ftl_bdevperf 00:22:19.913 ************************************ 00:22:19.913 04:07:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:22:20.172 * Looking for test storage... 
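[Editorial note] The three fio_bdev runs above all share one mechanism: autotest_common.sh runs ldd over the spdk_bdev fio plugin to find the ASan runtime it was linked against, preloads that runtime ahead of the plugin, and then hands fio a job file that selects the SPDK bdev ioengine. A minimal sketch of the same invocation follows, reusing the plugin and fio paths shown in this log; the job file name and its spdk_json_conf value are illustrative, not taken from the test.

    # randw.fio -- hypothetical job file for the spdk_bdev ioengine
    #   [global]
    #   ioengine=spdk_bdev
    #   spdk_json_conf=/path/to/ftl.json   ; bdev config that describes ftl0
    #   thread=1                           ; required by the spdk_bdev plugin
    #   [job0]
    #   filename=ftl0                      ; bdev name, not a file path
    #   rw=randwrite
    #   bs=4k
    #   iodepth=128

    # Preload the ASan runtime before the plugin, exactly as the traced test does.
    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
      /usr/src/fio/fio randw.fio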
00:22:20.172 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:20.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.172 --rc genhtml_branch_coverage=1 00:22:20.172 --rc genhtml_function_coverage=1 00:22:20.172 --rc genhtml_legend=1 00:22:20.172 --rc geninfo_all_blocks=1 00:22:20.172 --rc geninfo_unexecuted_blocks=1 00:22:20.172 00:22:20.172 ' 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:20.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.172 --rc genhtml_branch_coverage=1 00:22:20.172 
--rc genhtml_function_coverage=1 00:22:20.172 --rc genhtml_legend=1 00:22:20.172 --rc geninfo_all_blocks=1 00:22:20.172 --rc geninfo_unexecuted_blocks=1 00:22:20.172 00:22:20.172 ' 00:22:20.172 04:07:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:20.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.172 --rc genhtml_branch_coverage=1 00:22:20.172 --rc genhtml_function_coverage=1 00:22:20.172 --rc genhtml_legend=1 00:22:20.172 --rc geninfo_all_blocks=1 00:22:20.172 --rc geninfo_unexecuted_blocks=1 00:22:20.172 00:22:20.172 ' 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:20.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:20.173 --rc genhtml_branch_coverage=1 00:22:20.173 --rc genhtml_function_coverage=1 00:22:20.173 --rc genhtml_legend=1 00:22:20.173 --rc geninfo_all_blocks=1 00:22:20.173 --rc geninfo_unexecuted_blocks=1 00:22:20.173 00:22:20.173 ' 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=77771 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 77771 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 77771 ']' 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:20.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:20.173 04:07:02 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:20.431 [2024-12-07 04:07:02.936735] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
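The trace above amounts to a small launch pattern: start bdevperf parked so the FTL device can be constructed over RPC first, then block until the RPC socket answers. A standalone sketch of that pattern follows; the polling loop is an illustration of what waitforlisten does (it probes rpc_get_methods), not the exact common.sh implementation, and killprocess is the SPDK test helper referenced in the trap traced above.

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # -z parks bdevperf until RPCs arrive; -T restricts the run to the ftl0 bdev.
  $bdevperf -z -T ftl0 &
  bdevperf_pid=$!
  trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT

  # Poll the UNIX domain socket until the target accepts RPCs (waitforlisten's job).
  until $rpc_py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done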
00:22:20.431 [2024-12-07 04:07:02.936924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77771 ] 00:22:20.431 [2024-12-07 04:07:03.116860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.690 [2024-12-07 04:07:03.223623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.260 04:07:03 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:21.260 04:07:03 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:22:21.260 04:07:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:21.260 04:07:03 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:22:21.260 04:07:03 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:21.260 04:07:03 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:22:21.260 04:07:03 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:22:21.260 04:07:03 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:21.518 04:07:04 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:21.518 04:07:04 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:22:21.518 04:07:04 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:21.518 04:07:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:21.518 04:07:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:21.518 04:07:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:22:21.518 04:07:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:22:21.518 04:07:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:21.777 04:07:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:21.777 { 00:22:21.777 "name": "nvme0n1", 00:22:21.777 "aliases": [ 00:22:21.777 "696c87c3-5c43-4ada-8c25-f65d5f222d73" 00:22:21.777 ], 00:22:21.777 "product_name": "NVMe disk", 00:22:21.777 "block_size": 4096, 00:22:21.777 "num_blocks": 1310720, 00:22:21.777 "uuid": "696c87c3-5c43-4ada-8c25-f65d5f222d73", 00:22:21.777 "numa_id": -1, 00:22:21.777 "assigned_rate_limits": { 00:22:21.777 "rw_ios_per_sec": 0, 00:22:21.777 "rw_mbytes_per_sec": 0, 00:22:21.777 "r_mbytes_per_sec": 0, 00:22:21.777 "w_mbytes_per_sec": 0 00:22:21.777 }, 00:22:21.777 "claimed": true, 00:22:21.777 "claim_type": "read_many_write_one", 00:22:21.777 "zoned": false, 00:22:21.777 "supported_io_types": { 00:22:21.777 "read": true, 00:22:21.777 "write": true, 00:22:21.777 "unmap": true, 00:22:21.777 "flush": true, 00:22:21.777 "reset": true, 00:22:21.777 "nvme_admin": true, 00:22:21.777 "nvme_io": true, 00:22:21.777 "nvme_io_md": false, 00:22:21.777 "write_zeroes": true, 00:22:21.777 "zcopy": false, 00:22:21.777 "get_zone_info": false, 00:22:21.777 "zone_management": false, 00:22:21.777 "zone_append": false, 00:22:21.777 "compare": true, 00:22:21.777 "compare_and_write": false, 00:22:21.777 "abort": true, 00:22:21.777 "seek_hole": false, 00:22:21.777 "seek_data": false, 00:22:21.777 "copy": true, 00:22:21.777 "nvme_iov_md": false 00:22:21.777 }, 00:22:21.777 "driver_specific": { 00:22:21.777 
"nvme": [ 00:22:21.777 { 00:22:21.778 "pci_address": "0000:00:11.0", 00:22:21.778 "trid": { 00:22:21.778 "trtype": "PCIe", 00:22:21.778 "traddr": "0000:00:11.0" 00:22:21.778 }, 00:22:21.778 "ctrlr_data": { 00:22:21.778 "cntlid": 0, 00:22:21.778 "vendor_id": "0x1b36", 00:22:21.778 "model_number": "QEMU NVMe Ctrl", 00:22:21.778 "serial_number": "12341", 00:22:21.778 "firmware_revision": "8.0.0", 00:22:21.778 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:21.778 "oacs": { 00:22:21.778 "security": 0, 00:22:21.778 "format": 1, 00:22:21.778 "firmware": 0, 00:22:21.778 "ns_manage": 1 00:22:21.778 }, 00:22:21.778 "multi_ctrlr": false, 00:22:21.778 "ana_reporting": false 00:22:21.778 }, 00:22:21.778 "vs": { 00:22:21.778 "nvme_version": "1.4" 00:22:21.778 }, 00:22:21.778 "ns_data": { 00:22:21.778 "id": 1, 00:22:21.778 "can_share": false 00:22:21.778 } 00:22:21.778 } 00:22:21.778 ], 00:22:21.778 "mp_policy": "active_passive" 00:22:21.778 } 00:22:21.778 } 00:22:21.778 ]' 00:22:21.778 04:07:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:21.778 04:07:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:22:21.778 04:07:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:21.778 04:07:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:21.778 04:07:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:21.778 04:07:04 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:22:21.778 04:07:04 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:22:21.778 04:07:04 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:21.778 04:07:04 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:22:21.778 04:07:04 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:21.778 04:07:04 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:22.037 04:07:04 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=55bf9d65-389e-4b66-bb39-7be2a7b11538 00:22:22.037 04:07:04 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:22:22.037 04:07:04 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 55bf9d65-389e-4b66-bb39-7be2a7b11538 00:22:22.295 04:07:04 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:22.295 04:07:05 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=27cad91a-3dd0-4350-b2e4-5ca3884868cb 00:22:22.295 04:07:05 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 27cad91a-3dd0-4350-b2e4-5ca3884868cb 00:22:22.555 04:07:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=2962afb3-6dd2-404e-abb6-1c6725c5b86b 00:22:22.555 04:07:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 2962afb3-6dd2-404e-abb6-1c6725c5b86b 00:22:22.555 04:07:05 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:22:22.555 04:07:05 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:22.555 04:07:05 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=2962afb3-6dd2-404e-abb6-1c6725c5b86b 00:22:22.555 04:07:05 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:22:22.555 04:07:05 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 2962afb3-6dd2-404e-abb6-1c6725c5b86b 00:22:22.555 04:07:05 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=2962afb3-6dd2-404e-abb6-1c6725c5b86b 00:22:22.555 04:07:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:22.555 04:07:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:22:22.555 04:07:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:22:22.555 04:07:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2962afb3-6dd2-404e-abb6-1c6725c5b86b 00:22:22.814 04:07:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:22.814 { 00:22:22.814 "name": "2962afb3-6dd2-404e-abb6-1c6725c5b86b", 00:22:22.814 "aliases": [ 00:22:22.814 "lvs/nvme0n1p0" 00:22:22.814 ], 00:22:22.814 "product_name": "Logical Volume", 00:22:22.814 "block_size": 4096, 00:22:22.814 "num_blocks": 26476544, 00:22:22.814 "uuid": "2962afb3-6dd2-404e-abb6-1c6725c5b86b", 00:22:22.814 "assigned_rate_limits": { 00:22:22.814 "rw_ios_per_sec": 0, 00:22:22.814 "rw_mbytes_per_sec": 0, 00:22:22.814 "r_mbytes_per_sec": 0, 00:22:22.814 "w_mbytes_per_sec": 0 00:22:22.814 }, 00:22:22.814 "claimed": false, 00:22:22.814 "zoned": false, 00:22:22.814 "supported_io_types": { 00:22:22.814 "read": true, 00:22:22.814 "write": true, 00:22:22.814 "unmap": true, 00:22:22.814 "flush": false, 00:22:22.814 "reset": true, 00:22:22.814 "nvme_admin": false, 00:22:22.814 "nvme_io": false, 00:22:22.814 "nvme_io_md": false, 00:22:22.814 "write_zeroes": true, 00:22:22.814 "zcopy": false, 00:22:22.814 "get_zone_info": false, 00:22:22.814 "zone_management": false, 00:22:22.814 "zone_append": false, 00:22:22.814 "compare": false, 00:22:22.814 "compare_and_write": false, 00:22:22.814 "abort": false, 00:22:22.814 "seek_hole": true, 00:22:22.814 "seek_data": true, 00:22:22.814 "copy": false, 00:22:22.814 "nvme_iov_md": false 00:22:22.814 }, 00:22:22.814 "driver_specific": { 00:22:22.814 "lvol": { 00:22:22.814 "lvol_store_uuid": "27cad91a-3dd0-4350-b2e4-5ca3884868cb", 00:22:22.814 "base_bdev": "nvme0n1", 00:22:22.814 "thin_provision": true, 00:22:22.814 "num_allocated_clusters": 0, 00:22:22.814 "snapshot": false, 00:22:22.814 "clone": false, 00:22:22.814 "esnap_clone": false 00:22:22.814 } 00:22:22.814 } 00:22:22.814 } 00:22:22.814 ]' 00:22:22.814 04:07:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:22.814 04:07:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:22:22.814 04:07:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:22.814 04:07:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:22.814 04:07:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:22.814 04:07:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:22:22.814 04:07:05 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:22:22.814 04:07:05 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:22:22.814 04:07:05 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:23.073 04:07:05 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:23.073 04:07:05 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:23.073 04:07:05 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 2962afb3-6dd2-404e-abb6-1c6725c5b86b 00:22:23.073 04:07:05 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=2962afb3-6dd2-404e-abb6-1c6725c5b86b 00:22:23.073 04:07:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:23.073 04:07:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:22:23.073 04:07:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:22:23.073 04:07:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2962afb3-6dd2-404e-abb6-1c6725c5b86b 00:22:23.332 04:07:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:23.332 { 00:22:23.332 "name": "2962afb3-6dd2-404e-abb6-1c6725c5b86b", 00:22:23.332 "aliases": [ 00:22:23.332 "lvs/nvme0n1p0" 00:22:23.332 ], 00:22:23.332 "product_name": "Logical Volume", 00:22:23.332 "block_size": 4096, 00:22:23.332 "num_blocks": 26476544, 00:22:23.332 "uuid": "2962afb3-6dd2-404e-abb6-1c6725c5b86b", 00:22:23.332 "assigned_rate_limits": { 00:22:23.332 "rw_ios_per_sec": 0, 00:22:23.332 "rw_mbytes_per_sec": 0, 00:22:23.332 "r_mbytes_per_sec": 0, 00:22:23.332 "w_mbytes_per_sec": 0 00:22:23.332 }, 00:22:23.332 "claimed": false, 00:22:23.332 "zoned": false, 00:22:23.332 "supported_io_types": { 00:22:23.332 "read": true, 00:22:23.332 "write": true, 00:22:23.332 "unmap": true, 00:22:23.332 "flush": false, 00:22:23.332 "reset": true, 00:22:23.332 "nvme_admin": false, 00:22:23.332 "nvme_io": false, 00:22:23.332 "nvme_io_md": false, 00:22:23.332 "write_zeroes": true, 00:22:23.332 "zcopy": false, 00:22:23.332 "get_zone_info": false, 00:22:23.332 "zone_management": false, 00:22:23.332 "zone_append": false, 00:22:23.332 "compare": false, 00:22:23.332 "compare_and_write": false, 00:22:23.332 "abort": false, 00:22:23.332 "seek_hole": true, 00:22:23.332 "seek_data": true, 00:22:23.332 "copy": false, 00:22:23.332 "nvme_iov_md": false 00:22:23.332 }, 00:22:23.332 "driver_specific": { 00:22:23.332 "lvol": { 00:22:23.332 "lvol_store_uuid": "27cad91a-3dd0-4350-b2e4-5ca3884868cb", 00:22:23.332 "base_bdev": "nvme0n1", 00:22:23.332 "thin_provision": true, 00:22:23.332 "num_allocated_clusters": 0, 00:22:23.332 "snapshot": false, 00:22:23.332 "clone": false, 00:22:23.332 "esnap_clone": false 00:22:23.332 } 00:22:23.332 } 00:22:23.332 } 00:22:23.332 ]' 00:22:23.332 04:07:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:23.332 04:07:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:22:23.332 04:07:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:23.332 04:07:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:23.332 04:07:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:23.332 04:07:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:22:23.332 04:07:06 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:22:23.332 04:07:06 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:23.591 04:07:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:22:23.591 04:07:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 2962afb3-6dd2-404e-abb6-1c6725c5b86b 00:22:23.591 04:07:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=2962afb3-6dd2-404e-abb6-1c6725c5b86b 00:22:23.591 04:07:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:23.591 04:07:06 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:22:23.591 04:07:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:22:23.591 04:07:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2962afb3-6dd2-404e-abb6-1c6725c5b86b 00:22:23.851 04:07:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:23.851 { 00:22:23.851 "name": "2962afb3-6dd2-404e-abb6-1c6725c5b86b", 00:22:23.851 "aliases": [ 00:22:23.851 "lvs/nvme0n1p0" 00:22:23.851 ], 00:22:23.851 "product_name": "Logical Volume", 00:22:23.851 "block_size": 4096, 00:22:23.851 "num_blocks": 26476544, 00:22:23.851 "uuid": "2962afb3-6dd2-404e-abb6-1c6725c5b86b", 00:22:23.851 "assigned_rate_limits": { 00:22:23.851 "rw_ios_per_sec": 0, 00:22:23.851 "rw_mbytes_per_sec": 0, 00:22:23.851 "r_mbytes_per_sec": 0, 00:22:23.851 "w_mbytes_per_sec": 0 00:22:23.851 }, 00:22:23.851 "claimed": false, 00:22:23.851 "zoned": false, 00:22:23.851 "supported_io_types": { 00:22:23.851 "read": true, 00:22:23.851 "write": true, 00:22:23.851 "unmap": true, 00:22:23.851 "flush": false, 00:22:23.851 "reset": true, 00:22:23.851 "nvme_admin": false, 00:22:23.851 "nvme_io": false, 00:22:23.851 "nvme_io_md": false, 00:22:23.851 "write_zeroes": true, 00:22:23.851 "zcopy": false, 00:22:23.851 "get_zone_info": false, 00:22:23.851 "zone_management": false, 00:22:23.851 "zone_append": false, 00:22:23.851 "compare": false, 00:22:23.851 "compare_and_write": false, 00:22:23.851 "abort": false, 00:22:23.851 "seek_hole": true, 00:22:23.851 "seek_data": true, 00:22:23.851 "copy": false, 00:22:23.851 "nvme_iov_md": false 00:22:23.851 }, 00:22:23.851 "driver_specific": { 00:22:23.851 "lvol": { 00:22:23.851 "lvol_store_uuid": "27cad91a-3dd0-4350-b2e4-5ca3884868cb", 00:22:23.851 "base_bdev": "nvme0n1", 00:22:23.851 "thin_provision": true, 00:22:23.851 "num_allocated_clusters": 0, 00:22:23.851 "snapshot": false, 00:22:23.851 "clone": false, 00:22:23.851 "esnap_clone": false 00:22:23.851 } 00:22:23.851 } 00:22:23.851 } 00:22:23.851 ]' 00:22:23.851 04:07:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:23.851 04:07:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:22:23.851 04:07:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:23.851 04:07:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:23.851 04:07:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:23.851 04:07:06 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:22:23.851 04:07:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:22:23.851 04:07:06 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 2962afb3-6dd2-404e-abb6-1c6725c5b86b -c nvc0n1p0 --l2p_dram_limit 20 00:22:24.112 [2024-12-07 04:07:06.748490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.112 [2024-12-07 04:07:06.748543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:24.112 [2024-12-07 04:07:06.748558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:24.112 [2024-12-07 04:07:06.748570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.112 [2024-12-07 04:07:06.748654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.112 [2024-12-07 04:07:06.748668] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:24.112 [2024-12-07 04:07:06.748679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:22:24.112 [2024-12-07 04:07:06.748691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.112 [2024-12-07 04:07:06.748709] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:24.112 [2024-12-07 04:07:06.749763] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:24.112 [2024-12-07 04:07:06.749796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.112 [2024-12-07 04:07:06.749813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:24.112 [2024-12-07 04:07:06.749825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.092 ms 00:22:24.112 [2024-12-07 04:07:06.749840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.112 [2024-12-07 04:07:06.749945] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 21aa3db6-10d8-4a98-a67a-16b207b9714a 00:22:24.112 [2024-12-07 04:07:06.751397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.112 [2024-12-07 04:07:06.751431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:24.112 [2024-12-07 04:07:06.751456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:24.112 [2024-12-07 04:07:06.751466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.112 [2024-12-07 04:07:06.759054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.112 [2024-12-07 04:07:06.759083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:24.112 [2024-12-07 04:07:06.759096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.555 ms 00:22:24.112 [2024-12-07 04:07:06.759108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.112 [2024-12-07 04:07:06.759231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.112 [2024-12-07 04:07:06.759246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:24.112 [2024-12-07 04:07:06.759264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:22:24.112 [2024-12-07 04:07:06.759277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.112 [2024-12-07 04:07:06.759344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.112 [2024-12-07 04:07:06.759357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:24.112 [2024-12-07 04:07:06.759369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:24.112 [2024-12-07 04:07:06.759379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.112 [2024-12-07 04:07:06.759407] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:24.112 [2024-12-07 04:07:06.764399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.112 [2024-12-07 04:07:06.764448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:24.112 [2024-12-07 04:07:06.764460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.012 ms 00:22:24.112 [2024-12-07 04:07:06.764477] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.112 [2024-12-07 04:07:06.764528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.112 [2024-12-07 04:07:06.764541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:24.112 [2024-12-07 04:07:06.764552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:24.112 [2024-12-07 04:07:06.764564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.112 [2024-12-07 04:07:06.764595] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:24.112 [2024-12-07 04:07:06.764753] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:24.113 [2024-12-07 04:07:06.764771] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:24.113 [2024-12-07 04:07:06.764796] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:24.113 [2024-12-07 04:07:06.764812] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:24.113 [2024-12-07 04:07:06.764827] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:24.113 [2024-12-07 04:07:06.764854] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:24.113 [2024-12-07 04:07:06.764866] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:24.113 [2024-12-07 04:07:06.764876] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:24.113 [2024-12-07 04:07:06.764890] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:24.113 [2024-12-07 04:07:06.764903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.113 [2024-12-07 04:07:06.764916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:24.113 [2024-12-07 04:07:06.764926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms 00:22:24.113 [2024-12-07 04:07:06.764939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.113 [2024-12-07 04:07:06.765029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.113 [2024-12-07 04:07:06.765043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:24.113 [2024-12-07 04:07:06.765054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:22:24.113 [2024-12-07 04:07:06.765068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.113 [2024-12-07 04:07:06.765152] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:24.113 [2024-12-07 04:07:06.765181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:24.113 [2024-12-07 04:07:06.765192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:24.113 [2024-12-07 04:07:06.765205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.113 [2024-12-07 04:07:06.765215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:24.113 [2024-12-07 04:07:06.765227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:24.113 [2024-12-07 04:07:06.765237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:24.113 
[2024-12-07 04:07:06.765249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:24.113 [2024-12-07 04:07:06.765258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:24.113 [2024-12-07 04:07:06.765270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:24.113 [2024-12-07 04:07:06.765279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:24.113 [2024-12-07 04:07:06.765303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:24.113 [2024-12-07 04:07:06.765312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:24.113 [2024-12-07 04:07:06.765327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:24.113 [2024-12-07 04:07:06.765337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:24.113 [2024-12-07 04:07:06.765416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.113 [2024-12-07 04:07:06.765425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:24.113 [2024-12-07 04:07:06.765437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:24.113 [2024-12-07 04:07:06.765446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.113 [2024-12-07 04:07:06.765458] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:24.113 [2024-12-07 04:07:06.765467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:24.113 [2024-12-07 04:07:06.765478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:24.113 [2024-12-07 04:07:06.765488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:24.113 [2024-12-07 04:07:06.765501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:24.113 [2024-12-07 04:07:06.765510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:24.113 [2024-12-07 04:07:06.765521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:24.113 [2024-12-07 04:07:06.765531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:24.113 [2024-12-07 04:07:06.765542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:24.113 [2024-12-07 04:07:06.765551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:24.113 [2024-12-07 04:07:06.765563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:24.113 [2024-12-07 04:07:06.765572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:24.113 [2024-12-07 04:07:06.765586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:24.113 [2024-12-07 04:07:06.765595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:24.113 [2024-12-07 04:07:06.765607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:24.113 [2024-12-07 04:07:06.765616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:24.113 [2024-12-07 04:07:06.765628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:24.113 [2024-12-07 04:07:06.765637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:24.113 [2024-12-07 04:07:06.765650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:24.113 [2024-12-07 04:07:06.765659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:22:24.113 [2024-12-07 04:07:06.765670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.113 [2024-12-07 04:07:06.765680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:24.113 [2024-12-07 04:07:06.765691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:24.113 [2024-12-07 04:07:06.765700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.113 [2024-12-07 04:07:06.765711] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:24.113 [2024-12-07 04:07:06.765721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:24.113 [2024-12-07 04:07:06.765735] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:24.113 [2024-12-07 04:07:06.765744] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.113 [2024-12-07 04:07:06.765759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:24.113 [2024-12-07 04:07:06.765768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:24.113 [2024-12-07 04:07:06.765780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:24.113 [2024-12-07 04:07:06.765789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:24.113 [2024-12-07 04:07:06.765801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:24.113 [2024-12-07 04:07:06.765811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:24.113 [2024-12-07 04:07:06.765824] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:24.113 [2024-12-07 04:07:06.765837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:24.113 [2024-12-07 04:07:06.765851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:24.113 [2024-12-07 04:07:06.765861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:24.113 [2024-12-07 04:07:06.765874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:24.113 [2024-12-07 04:07:06.765884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:24.113 [2024-12-07 04:07:06.765897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:24.113 [2024-12-07 04:07:06.765907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:24.113 [2024-12-07 04:07:06.765919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:24.113 [2024-12-07 04:07:06.765940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:24.113 [2024-12-07 04:07:06.765957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:24.113 [2024-12-07 04:07:06.765968] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:24.113 [2024-12-07 04:07:06.765980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:24.113 [2024-12-07 04:07:06.765991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:24.113 [2024-12-07 04:07:06.766003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:24.113 [2024-12-07 04:07:06.766014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:24.113 [2024-12-07 04:07:06.766027] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:24.113 [2024-12-07 04:07:06.766039] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:24.113 [2024-12-07 04:07:06.766056] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:24.113 [2024-12-07 04:07:06.766067] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:24.113 [2024-12-07 04:07:06.766079] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:24.113 [2024-12-07 04:07:06.766090] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:24.113 [2024-12-07 04:07:06.766103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.113 [2024-12-07 04:07:06.766114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:24.114 [2024-12-07 04:07:06.766129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.003 ms 00:22:24.114 [2024-12-07 04:07:06.766138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.114 [2024-12-07 04:07:06.766177] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
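Stripped of the xtrace noise, the device stack assembled above reduces to a handful of rpc.py calls, issued after clear_lvols has deleted any leftover lvol store. The sketch below is a recap rather than a verbatim excerpt of bdevperf.sh/common.sh: the UUID placeholders stand in for the generated values printed in the log (27cad91a-... for the lvstore, 2962afb3-... for the lvol), while the sizes, BDF addresses, and flags match the trace exactly.

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc_py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0  # base dev: 1310720 x 4096 B blocks
  $rpc_py bdev_lvol_create_lvstore nvme0n1 lvs
  $rpc_py bdev_lvol_create nvme0n1p0 103424 -t -u <lvs_uuid>            # thin-provisioned 103424 MiB volume
  $rpc_py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # NV cache dev
  $rpc_py bdev_split_create nvc0n1 -s 5171 1                            # one 5171 MiB cache partition
  $rpc_py -t 240 bdev_ftl_create -b ftl0 -d <lvol_uuid> -c nvc0n1p0 --l2p_dram_limit 20

The --l2p_dram_limit of 20 MiB squares with the layout dump above: 20971520 L2P entries at 4 bytes each is exactly the 80.00 MiB l2p region, so the mapping table has to be paged against the DRAM cap rather than held fully resident.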
00:22:24.114 [2024-12-07 04:07:06.766200] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:28.314 [2024-12-07 04:07:10.358157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.314 [2024-12-07 04:07:10.358230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:28.314 [2024-12-07 04:07:10.358249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3597.804 ms 00:22:28.314 [2024-12-07 04:07:10.358259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.314 [2024-12-07 04:07:10.395264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.314 [2024-12-07 04:07:10.395319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:28.314 [2024-12-07 04:07:10.395337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.642 ms 00:22:28.314 [2024-12-07 04:07:10.395348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.314 [2024-12-07 04:07:10.395463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.314 [2024-12-07 04:07:10.395477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:28.314 [2024-12-07 04:07:10.395493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:22:28.314 [2024-12-07 04:07:10.395503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.314 [2024-12-07 04:07:10.466409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.314 [2024-12-07 04:07:10.466456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:28.314 [2024-12-07 04:07:10.466475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.965 ms 00:22:28.314 [2024-12-07 04:07:10.466486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.314 [2024-12-07 04:07:10.466526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.314 [2024-12-07 04:07:10.466537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:28.314 [2024-12-07 04:07:10.466549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:28.314 [2024-12-07 04:07:10.466561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.314 [2024-12-07 04:07:10.467080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.314 [2024-12-07 04:07:10.467104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:28.314 [2024-12-07 04:07:10.467119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.463 ms 00:22:28.314 [2024-12-07 04:07:10.467130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.314 [2024-12-07 04:07:10.467239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.314 [2024-12-07 04:07:10.467254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:28.314 [2024-12-07 04:07:10.467270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:22:28.314 [2024-12-07 04:07:10.467280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.314 [2024-12-07 04:07:10.485950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.314 [2024-12-07 04:07:10.485986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:28.314 [2024-12-07 
04:07:10.486002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.675 ms 00:22:28.314 [2024-12-07 04:07:10.486023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.314 [2024-12-07 04:07:10.498304] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:22:28.314 [2024-12-07 04:07:10.504223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.314 [2024-12-07 04:07:10.504260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:28.314 [2024-12-07 04:07:10.504273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.150 ms 00:22:28.314 [2024-12-07 04:07:10.504285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.314 [2024-12-07 04:07:10.597532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.314 [2024-12-07 04:07:10.597613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:28.314 [2024-12-07 04:07:10.597631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.371 ms 00:22:28.314 [2024-12-07 04:07:10.597644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.314 [2024-12-07 04:07:10.597853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.314 [2024-12-07 04:07:10.597878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:28.314 [2024-12-07 04:07:10.597890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.166 ms 00:22:28.314 [2024-12-07 04:07:10.597906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.314 [2024-12-07 04:07:10.632693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.314 [2024-12-07 04:07:10.632736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:28.314 [2024-12-07 04:07:10.632750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.781 ms 00:22:28.314 [2024-12-07 04:07:10.632763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.314 [2024-12-07 04:07:10.666524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.314 [2024-12-07 04:07:10.666566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:28.314 [2024-12-07 04:07:10.666596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.778 ms 00:22:28.314 [2024-12-07 04:07:10.666609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.314 [2024-12-07 04:07:10.667309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.314 [2024-12-07 04:07:10.667341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:28.314 [2024-12-07 04:07:10.667353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.664 ms 00:22:28.314 [2024-12-07 04:07:10.667367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.314 [2024-12-07 04:07:10.764570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.314 [2024-12-07 04:07:10.764634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:28.314 [2024-12-07 04:07:10.764649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 97.309 ms 00:22:28.314 [2024-12-07 04:07:10.764662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.314 [2024-12-07 
04:07:10.799956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.314 [2024-12-07 04:07:10.800001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:28.314 [2024-12-07 04:07:10.800017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.276 ms 00:22:28.314 [2024-12-07 04:07:10.800030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.314 [2024-12-07 04:07:10.833372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.314 [2024-12-07 04:07:10.833416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:28.314 [2024-12-07 04:07:10.833430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.359 ms 00:22:28.314 [2024-12-07 04:07:10.833442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.314 [2024-12-07 04:07:10.867801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.314 [2024-12-07 04:07:10.867844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:28.314 [2024-12-07 04:07:10.867858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.379 ms 00:22:28.314 [2024-12-07 04:07:10.867870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.314 [2024-12-07 04:07:10.867910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.314 [2024-12-07 04:07:10.867935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:28.314 [2024-12-07 04:07:10.867946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:28.314 [2024-12-07 04:07:10.867959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.314 [2024-12-07 04:07:10.868076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.315 [2024-12-07 04:07:10.868096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:28.315 [2024-12-07 04:07:10.868107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:22:28.315 [2024-12-07 04:07:10.868120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.315 [2024-12-07 04:07:10.869171] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4126.886 ms, result 0 00:22:28.315 { 00:22:28.315 "name": "ftl0", 00:22:28.315 "uuid": "21aa3db6-10d8-4a98-a67a-16b207b9714a" 00:22:28.315 } 00:22:28.315 04:07:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:22:28.315 04:07:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:22:28.315 04:07:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:22:28.574 04:07:11 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:22:28.574 [2024-12-07 04:07:11.164864] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:22:28.574 I/O size of 69632 is greater than zero copy threshold (65536). 00:22:28.574 Zero copy mechanism will not be used. 00:22:28.574 Running I/O for 4 seconds... 
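With 'FTL startup' reported at 4126.886 ms and result 0, the script does a quick smoke check on the new bdev and kicks off the first workload. Pulled out of the trace (with rpc_py as defined earlier), those two steps are just:

  # Confirm the FTL bdev actually came up under the expected name.
  $rpc_py bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0

  # Depth-1 random writes of 69632 B (68 KiB) for 4 s; 69632 exceeds the
  # 65536 B zero-copy threshold, hence the notice that zero copy is disabled.
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632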
00:22:30.448 1346.00 IOPS, 89.38 MiB/s
[2024-12-07T04:07:14.575Z] 1384.50 IOPS, 91.94 MiB/s
[2024-12-07T04:07:15.510Z] 1409.00 IOPS, 93.57 MiB/s
[2024-12-07T04:07:15.510Z] 1441.75 IOPS, 95.74 MiB/s
00:22:32.774 Latency(us)
00:22:32.774 [2024-12-07T04:07:15.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:32.774 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:22:32.774 ftl0 : 4.00 1441.47 95.72 0.00 0.00 729.38 218.78 2052.93
00:22:32.774 [2024-12-07T04:07:15.510Z] ===================================================================================================================
00:22:32.774 [2024-12-07T04:07:15.510Z] Total : 1441.47 95.72 0.00 0.00 729.38 218.78 2052.93
00:22:32.774 [2024-12-07 04:07:15.168898] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
{
00:22:32.774 "results": [
00:22:32.774 {
00:22:32.774 "job": "ftl0",
00:22:32.774 "core_mask": "0x1",
00:22:32.774 "workload": "randwrite",
00:22:32.774 "status": "finished",
00:22:32.774 "queue_depth": 1,
00:22:32.774 "io_size": 69632,
00:22:32.774 "runtime": 4.001475,
00:22:32.774 "iops": 1441.468458505926,
00:22:32.774 "mibps": 95.72251482265915,
00:22:32.774 "io_failed": 0,
00:22:32.774 "io_timeout": 0,
00:22:32.774 "avg_latency_us": 729.3799040823488,
00:22:32.774 "min_latency_us": 218.78232931726907,
00:22:32.774 "max_latency_us": 2052.9349397590363
00:22:32.774 }
00:22:32.774 ],
00:22:32.774 "core_count": 1
00:22:32.774 }
00:22:32.774 04:07:15 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-12-07 04:07:15.301754] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
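A quick consistency check on the depth-1 table above: the MiB/s column is just IOPS times the 69632-byte I/O size (a one-liner aside, not part of the test):

  echo '1441.47 * 69632 / 1048576' | bc -l    # ~95.72, matching the ftl0 row's MiB/s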
00:22:34.708 11437.00 IOPS, 44.68 MiB/s
[2024-12-07T04:07:18.381Z] 11482.00 IOPS, 44.85 MiB/s
[2024-12-07T04:07:19.318Z] 11505.67 IOPS, 44.94 MiB/s
[2024-12-07T04:07:19.578Z] 11498.25 IOPS, 44.92 MiB/s
00:22:36.842 Latency(us)
00:22:36.842 [2024-12-07T04:07:19.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:36.842 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:22:36.842 ftl0 : 4.02 11485.08 44.86 0.00 0.00 11122.29 223.72 33057.52
00:22:36.842 [2024-12-07T04:07:19.578Z] ===================================================================================================================
00:22:36.842 [2024-12-07T04:07:19.578Z] Total : 11485.08 44.86 0.00 0.00 11122.29 0.00 33057.52
00:22:36.842 [2024-12-07 04:07:19.319583] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
{
00:22:36.842 "results": [
00:22:36.842 {
00:22:36.842 "job": "ftl0",
00:22:36.842 "core_mask": "0x1",
00:22:36.842 "workload": "randwrite",
00:22:36.842 "status": "finished",
00:22:36.842 "queue_depth": 128,
00:22:36.842 "io_size": 4096,
00:22:36.842 "runtime": 4.015385,
00:22:36.842 "iops": 11485.07552824947,
00:22:36.842 "mibps": 44.86357628222449,
00:22:36.842 "io_failed": 0,
00:22:36.842 "io_timeout": 0,
00:22:36.842 "avg_latency_us": 11122.294184487806,
00:22:36.842 "min_latency_us": 223.71726907630523,
00:22:36.842 "max_latency_us": 33057.51646586345
00:22:36.842 }
00:22:36.842 ],
00:22:36.842 "core_count": 1
00:22:36.842 }
00:22:36.842 04:07:19 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-12-07 04:07:19.443743] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
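Each perform_tests run also emits the JSON results object seen interleaved above. If that blob is captured to a file, the headline numbers can be lifted out with jq; a small sketch, assuming the object has been saved as results.json:

  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' results.json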
00:22:39.153 6836.00 IOPS, 26.70 MiB/s
[2024-12-07T04:07:22.457Z] 7134.00 IOPS, 27.87 MiB/s
[2024-12-07T04:07:23.833Z] 7711.67 IOPS, 30.12 MiB/s
[2024-12-07T04:07:23.833Z] 8009.00 IOPS, 31.29 MiB/s
00:22:41.097 Latency(us)
00:22:41.097 [2024-12-07T04:07:23.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:41.097 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:41.097 Verification LBA range: start 0x0 length 0x1400000
00:22:41.097 ftl0 : 4.01 8021.95 31.34 0.00 0.00 15910.56 259.91 34531.42
00:22:41.097 [2024-12-07T04:07:23.833Z] ===================================================================================================================
00:22:41.097 [2024-12-07T04:07:23.833Z] Total : 8021.95 31.34 0.00 0.00 15910.56 0.00 34531.42
00:22:41.097 {
00:22:41.097 "results": [
00:22:41.097 {
00:22:41.097 "job": "ftl0",
00:22:41.097 "core_mask": "0x1",
00:22:41.097 "workload": "verify",
00:22:41.097 "status": "finished",
00:22:41.097 "verify_range": {
00:22:41.097 "start": 0,
00:22:41.097 "length": 20971520
00:22:41.097 },
00:22:41.097 "queue_depth": 128,
00:22:41.097 "io_size": 4096,
00:22:41.097 "runtime": 4.009248,
00:22:41.097 "iops": 8021.953244099642,
00:22:41.097 "mibps": 31.335754859764226,
00:22:41.097 "io_failed": 0,
00:22:41.097 "io_timeout": 0,
00:22:41.097 "avg_latency_us": 15910.559798250273,
00:22:41.097 "min_latency_us": 259.906827309237,
00:22:41.097 "max_latency_us": 34531.41847389558
00:22:41.097 }
00:22:41.097 ],
00:22:41.097 "core_count": 1
00:22:41.097 }
[2024-12-07 04:07:23.470045] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
04:07:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
[2024-12-07 04:07:23.672815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-07 04:07:23.672861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
[2024-12-07 04:07:23.672875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms
[2024-12-07 04:07:23.672888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-07 04:07:23.672908] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-12-07 04:07:23.676886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-07 04:07:23.676916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
[2024-12-07 04:07:23.676936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.965 ms
[2024-12-07 04:07:23.676946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-07 04:07:23.678924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-07 04:07:23.678973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
[2024-12-07 04:07:23.678992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.939 ms
[2024-12-07 04:07:23.679003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:41.359 [2024-12-07 04:07:23.883383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:41.359 [2024-12-07 04:07:23.883430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist
L2P 00:22:41.359 [2024-12-07 04:07:23.883452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 204.687 ms 00:22:41.359 [2024-12-07 04:07:23.883464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.359 [2024-12-07 04:07:23.888442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.359 [2024-12-07 04:07:23.888477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:41.359 [2024-12-07 04:07:23.888509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.928 ms 00:22:41.359 [2024-12-07 04:07:23.888524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.359 [2024-12-07 04:07:23.924065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.359 [2024-12-07 04:07:23.924112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:41.359 [2024-12-07 04:07:23.924130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.542 ms 00:22:41.359 [2024-12-07 04:07:23.924140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.359 [2024-12-07 04:07:23.945141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.359 [2024-12-07 04:07:23.945183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:41.359 [2024-12-07 04:07:23.945199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.991 ms 00:22:41.359 [2024-12-07 04:07:23.945209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.359 [2024-12-07 04:07:23.945370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.359 [2024-12-07 04:07:23.945387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:41.359 [2024-12-07 04:07:23.945404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:22:41.359 [2024-12-07 04:07:23.945413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.359 [2024-12-07 04:07:23.980217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.359 [2024-12-07 04:07:23.980256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:41.359 [2024-12-07 04:07:23.980271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.843 ms 00:22:41.359 [2024-12-07 04:07:23.980281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.359 [2024-12-07 04:07:24.014525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.359 [2024-12-07 04:07:24.014563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:41.359 [2024-12-07 04:07:24.014579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.259 ms 00:22:41.359 [2024-12-07 04:07:24.014588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.359 [2024-12-07 04:07:24.048052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.359 [2024-12-07 04:07:24.048092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:41.359 [2024-12-07 04:07:24.048107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.474 ms 00:22:41.359 [2024-12-07 04:07:24.048117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.359 [2024-12-07 04:07:24.081558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.359 [2024-12-07 04:07:24.081597] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:41.359 [2024-12-07 04:07:24.081631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.405 ms 00:22:41.359 [2024-12-07 04:07:24.081641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.359 [2024-12-07 04:07:24.081695] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:41.359 [2024-12-07 04:07:24.081715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:41.359 [2024-12-07 04:07:24.081731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:41.359 [2024-12-07 04:07:24.081742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:41.359 [2024-12-07 04:07:24.081756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:41.359 [2024-12-07 04:07:24.081768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:41.359 [2024-12-07 04:07:24.081781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:41.359 [2024-12-07 04:07:24.081792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:41.359 [2024-12-07 04:07:24.081805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:41.359 [2024-12-07 04:07:24.081815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:41.359 [2024-12-07 04:07:24.081828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:41.359 [2024-12-07 04:07:24.081839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:41.359 [2024-12-07 04:07:24.081852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.081863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.081878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.081889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.081902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.081913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.081943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.081955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.081987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.081998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:22:41.360 [2024-12-07 04:07:24.082023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082962] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.082989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.083002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:41.360 [2024-12-07 04:07:24.083021] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:41.361 [2024-12-07 04:07:24.083033] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 21aa3db6-10d8-4a98-a67a-16b207b9714a 00:22:41.361 [2024-12-07 04:07:24.083047] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:41.361 [2024-12-07 04:07:24.083060] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:41.361 [2024-12-07 04:07:24.083069] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:41.361 [2024-12-07 04:07:24.083082] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:41.361 [2024-12-07 04:07:24.083092] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:41.361 [2024-12-07 04:07:24.083105] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:41.361 [2024-12-07 04:07:24.083115] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:41.361 [2024-12-07 04:07:24.083129] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:41.361 [2024-12-07 04:07:24.083138] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:41.361 [2024-12-07 04:07:24.083151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.361 [2024-12-07 04:07:24.083161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:41.361 [2024-12-07 04:07:24.083175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.473 ms 00:22:41.361 [2024-12-07 04:07:24.083185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.621 [2024-12-07 04:07:24.102692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.621 [2024-12-07 04:07:24.102727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:41.621 [2024-12-07 04:07:24.102759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.483 ms 00:22:41.621 [2024-12-07 04:07:24.102769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.621 [2024-12-07 04:07:24.103280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:41.621 [2024-12-07 04:07:24.103299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:41.621 [2024-12-07 04:07:24.103313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.488 ms 00:22:41.621 [2024-12-07 04:07:24.103324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.621 [2024-12-07 04:07:24.155428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.621 [2024-12-07 04:07:24.155464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:41.621 [2024-12-07 04:07:24.155498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.621 [2024-12-07 04:07:24.155509] mngt/ftl_mngt.c: 431:trace_step: 
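Each Action block in the shutdown trace above is a name/duration/status triple. As an aside, once such a trace has been captured one field per line, say in a hypothetical ftl_trace.log, the per-step timings can be paired up with awk; a minimal sketch under that assumption:

    awk '/name:/     { sub(/.*name: /, "");     step = $0 }
         /duration:/ { sub(/.*duration: /, ""); print step " -> " $0 }' ftl_trace.log

which yields lines such as "Persist L2P -> 204.687 ms".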
*NOTICE*: [FTL][ftl0] status: 0 00:22:41.621 [2024-12-07 04:07:24.155565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.621 [2024-12-07 04:07:24.155576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:41.621 [2024-12-07 04:07:24.155590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.621 [2024-12-07 04:07:24.155602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.621 [2024-12-07 04:07:24.155699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.621 [2024-12-07 04:07:24.155713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:41.621 [2024-12-07 04:07:24.155726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.621 [2024-12-07 04:07:24.155736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.621 [2024-12-07 04:07:24.155755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.621 [2024-12-07 04:07:24.155766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:41.621 [2024-12-07 04:07:24.155778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.621 [2024-12-07 04:07:24.155788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.621 [2024-12-07 04:07:24.274148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.621 [2024-12-07 04:07:24.274223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:41.621 [2024-12-07 04:07:24.274245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.621 [2024-12-07 04:07:24.274255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.881 [2024-12-07 04:07:24.369714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.881 [2024-12-07 04:07:24.369766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:41.881 [2024-12-07 04:07:24.369801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.881 [2024-12-07 04:07:24.369811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.881 [2024-12-07 04:07:24.369922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.881 [2024-12-07 04:07:24.369935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:41.881 [2024-12-07 04:07:24.369967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.881 [2024-12-07 04:07:24.369978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.881 [2024-12-07 04:07:24.370029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.881 [2024-12-07 04:07:24.370042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:41.881 [2024-12-07 04:07:24.370054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.881 [2024-12-07 04:07:24.370064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.881 [2024-12-07 04:07:24.370210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.881 [2024-12-07 04:07:24.370229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:41.881 [2024-12-07 04:07:24.370246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:22:41.881 [2024-12-07 04:07:24.370258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.881 [2024-12-07 04:07:24.370298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.881 [2024-12-07 04:07:24.370311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:41.881 [2024-12-07 04:07:24.370325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.881 [2024-12-07 04:07:24.370335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.881 [2024-12-07 04:07:24.370376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.881 [2024-12-07 04:07:24.370390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:41.881 [2024-12-07 04:07:24.370403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.881 [2024-12-07 04:07:24.370424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.882 [2024-12-07 04:07:24.370469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:41.882 [2024-12-07 04:07:24.370481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:41.882 [2024-12-07 04:07:24.370495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:41.882 [2024-12-07 04:07:24.370505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:41.882 [2024-12-07 04:07:24.370633] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 698.907 ms, result 0 00:22:41.882 true 00:22:41.882 04:07:24 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 77771 00:22:41.882 04:07:24 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 77771 ']' 00:22:41.882 04:07:24 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 77771 00:22:41.882 04:07:24 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:22:41.882 04:07:24 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:41.882 04:07:24 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77771 00:22:41.882 killing process with pid 77771 00:22:41.882 Received shutdown signal, test time was about 4.000000 seconds 00:22:41.882 00:22:41.882 Latency(us) 00:22:41.882 [2024-12-07T04:07:24.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.882 [2024-12-07T04:07:24.618Z] =================================================================================================================== 00:22:41.882 [2024-12-07T04:07:24.618Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:41.882 04:07:24 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:41.882 04:07:24 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:41.882 04:07:24 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77771' 00:22:41.882 04:07:24 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 77771 00:22:41.882 04:07:24 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 77771 00:22:46.078 Remove shared memory files 00:22:46.078 04:07:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:46.078 04:07:27 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:22:46.078 04:07:27 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:46.078 04:07:27 
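The killprocess helper traced above resolves the process name with ps before signalling the pid. A minimal sketch of that pattern (not the autotest_common.sh implementation itself):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0    # already gone, nothing to do
        local name
        name=$(ps --no-headers -o comm= "$pid")   # same lookup as in the trace
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid" 2>/dev/null || true           # reap it if it is our child
    }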
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:22:46.078 04:07:27 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:22:46.078 04:07:27 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:22:46.078 04:07:27 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:46.078 04:07:27 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:22:46.078 00:22:46.078 real 0m25.404s 00:22:46.078 user 0m27.892s 00:22:46.078 sys 0m1.261s 00:22:46.078 04:07:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:46.078 04:07:27 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:22:46.078 ************************************ 00:22:46.078 END TEST ftl_bdevperf 00:22:46.078 ************************************ 00:22:46.078 04:07:28 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:22:46.078 04:07:28 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:46.078 04:07:28 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:46.078 04:07:28 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:46.078 ************************************ 00:22:46.078 START TEST ftl_trim 00:22:46.078 ************************************ 00:22:46.078 04:07:28 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:22:46.078 * Looking for test storage... 00:22:46.078 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:46.078 04:07:28 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:46.078 04:07:28 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:22:46.078 04:07:28 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:46.078 04:07:28 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:46.078 04:07:28 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:46.078 04:07:28 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:46.078 04:07:28 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:46.078 04:07:28 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:22:46.078 04:07:28 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:22:46.078 04:07:28 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:22:46.078 04:07:28 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:22:46.078 04:07:28 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:22:46.078 04:07:28 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:22:46.078 04:07:28 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:22:46.078 04:07:28 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:46.078 04:07:28 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:22:46.078 04:07:28 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:22:46.078 04:07:28 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:46.078 04:07:28 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:46.078 04:07:28 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:22:46.078 04:07:28 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:22:46.078 04:07:28 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:46.078 04:07:28 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:22:46.078 04:07:28 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:22:46.078 04:07:28 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:22:46.078 04:07:28 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:22:46.078 04:07:28 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:46.078 04:07:28 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:22:46.078 04:07:28 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:22:46.078 04:07:28 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:46.078 04:07:28 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:46.078 04:07:28 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:22:46.078 04:07:28 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:46.078 04:07:28 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:46.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.078 --rc genhtml_branch_coverage=1 00:22:46.078 --rc genhtml_function_coverage=1 00:22:46.078 --rc genhtml_legend=1 00:22:46.078 --rc geninfo_all_blocks=1 00:22:46.078 --rc geninfo_unexecuted_blocks=1 00:22:46.078 00:22:46.079 ' 00:22:46.079 04:07:28 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:46.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.079 --rc genhtml_branch_coverage=1 00:22:46.079 --rc genhtml_function_coverage=1 00:22:46.079 --rc genhtml_legend=1 00:22:46.079 --rc geninfo_all_blocks=1 00:22:46.079 --rc geninfo_unexecuted_blocks=1 00:22:46.079 00:22:46.079 ' 00:22:46.079 04:07:28 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:46.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.079 --rc genhtml_branch_coverage=1 00:22:46.079 --rc genhtml_function_coverage=1 00:22:46.079 --rc genhtml_legend=1 00:22:46.079 --rc geninfo_all_blocks=1 00:22:46.079 --rc geninfo_unexecuted_blocks=1 00:22:46.079 00:22:46.079 ' 00:22:46.079 04:07:28 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:46.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.079 --rc genhtml_branch_coverage=1 00:22:46.079 --rc genhtml_function_coverage=1 00:22:46.079 --rc genhtml_legend=1 00:22:46.079 --rc geninfo_all_blocks=1 00:22:46.079 --rc geninfo_unexecuted_blocks=1 00:22:46.079 00:22:46.079 ' 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
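The cmp_versions trace above compares dotted version strings component by component after splitting on '.'. A minimal sketch of an equivalent "less than" check built on sort -V instead (an assumption: GNU sort is available, as it is on these CI images):

    version_lt() {    # true if $1 < $2 in dotted-version order
        [ "$1" = "$2" ] && return 1
        [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    version_lt 1.15 2 && echo "lcov 1.15 predates 2"   # true here, as the trace also concludes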
00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:46.079 04:07:28 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78140 00:22:46.079 04:07:28 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78140 00:22:46.079 04:07:28 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78140 ']' 00:22:46.079 04:07:28 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.079 04:07:28 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:46.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.079 04:07:28 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.079 04:07:28 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:46.079 04:07:28 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:46.079 [2024-12-07 04:07:28.399998] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:22:46.079 [2024-12-07 04:07:28.400108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78140 ] 00:22:46.079 [2024-12-07 04:07:28.583342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:46.079 [2024-12-07 04:07:28.694836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.079 [2024-12-07 04:07:28.695020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.079 [2024-12-07 04:07:28.695062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.017 04:07:29 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:47.017 04:07:29 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:22:47.017 04:07:29 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:47.017 04:07:29 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:22:47.018 04:07:29 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:47.018 04:07:29 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:22:47.018 04:07:29 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:22:47.018 04:07:29 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:47.277 04:07:29 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:47.277 04:07:29 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:22:47.277 04:07:29 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:47.277 04:07:29 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:47.277 04:07:29 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:47.277 04:07:29 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:22:47.277 04:07:29 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:22:47.277 04:07:29 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:47.537 04:07:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:47.537 { 00:22:47.537 "name": "nvme0n1", 00:22:47.537 "aliases": [ 
00:22:47.537 "7a23282d-548f-432f-95de-8b50bc3d06ff" 00:22:47.537 ], 00:22:47.537 "product_name": "NVMe disk", 00:22:47.537 "block_size": 4096, 00:22:47.537 "num_blocks": 1310720, 00:22:47.537 "uuid": "7a23282d-548f-432f-95de-8b50bc3d06ff", 00:22:47.537 "numa_id": -1, 00:22:47.537 "assigned_rate_limits": { 00:22:47.537 "rw_ios_per_sec": 0, 00:22:47.537 "rw_mbytes_per_sec": 0, 00:22:47.537 "r_mbytes_per_sec": 0, 00:22:47.537 "w_mbytes_per_sec": 0 00:22:47.537 }, 00:22:47.537 "claimed": true, 00:22:47.537 "claim_type": "read_many_write_one", 00:22:47.537 "zoned": false, 00:22:47.537 "supported_io_types": { 00:22:47.537 "read": true, 00:22:47.537 "write": true, 00:22:47.537 "unmap": true, 00:22:47.537 "flush": true, 00:22:47.537 "reset": true, 00:22:47.537 "nvme_admin": true, 00:22:47.537 "nvme_io": true, 00:22:47.537 "nvme_io_md": false, 00:22:47.537 "write_zeroes": true, 00:22:47.537 "zcopy": false, 00:22:47.537 "get_zone_info": false, 00:22:47.537 "zone_management": false, 00:22:47.537 "zone_append": false, 00:22:47.537 "compare": true, 00:22:47.537 "compare_and_write": false, 00:22:47.537 "abort": true, 00:22:47.537 "seek_hole": false, 00:22:47.537 "seek_data": false, 00:22:47.537 "copy": true, 00:22:47.537 "nvme_iov_md": false 00:22:47.537 }, 00:22:47.537 "driver_specific": { 00:22:47.537 "nvme": [ 00:22:47.537 { 00:22:47.537 "pci_address": "0000:00:11.0", 00:22:47.537 "trid": { 00:22:47.537 "trtype": "PCIe", 00:22:47.537 "traddr": "0000:00:11.0" 00:22:47.537 }, 00:22:47.537 "ctrlr_data": { 00:22:47.537 "cntlid": 0, 00:22:47.537 "vendor_id": "0x1b36", 00:22:47.537 "model_number": "QEMU NVMe Ctrl", 00:22:47.537 "serial_number": "12341", 00:22:47.537 "firmware_revision": "8.0.0", 00:22:47.537 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:47.537 "oacs": { 00:22:47.537 "security": 0, 00:22:47.537 "format": 1, 00:22:47.537 "firmware": 0, 00:22:47.537 "ns_manage": 1 00:22:47.537 }, 00:22:47.537 "multi_ctrlr": false, 00:22:47.537 "ana_reporting": false 00:22:47.537 }, 00:22:47.537 "vs": { 00:22:47.537 "nvme_version": "1.4" 00:22:47.537 }, 00:22:47.537 "ns_data": { 00:22:47.537 "id": 1, 00:22:47.537 "can_share": false 00:22:47.537 } 00:22:47.537 } 00:22:47.537 ], 00:22:47.537 "mp_policy": "active_passive" 00:22:47.537 } 00:22:47.537 } 00:22:47.537 ]' 00:22:47.538 04:07:30 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:47.538 04:07:30 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:22:47.538 04:07:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:47.538 04:07:30 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:47.538 04:07:30 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:47.538 04:07:30 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:22:47.538 04:07:30 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:22:47.538 04:07:30 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:47.538 04:07:30 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:22:47.538 04:07:30 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:47.538 04:07:30 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:47.797 04:07:30 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=27cad91a-3dd0-4350-b2e4-5ca3884868cb 00:22:47.797 04:07:30 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:22:47.797 04:07:30 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 27cad91a-3dd0-4350-b2e4-5ca3884868cb 00:22:48.056 04:07:30 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:48.056 04:07:30 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=af39bbc6-c6f0-4389-b7ef-c7d8d025ef17 00:22:48.056 04:07:30 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u af39bbc6-c6f0-4389-b7ef-c7d8d025ef17 00:22:48.315 04:07:30 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=921d341e-b6c3-476a-9fa7-1ccd0b1beb53 00:22:48.315 04:07:30 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 921d341e-b6c3-476a-9fa7-1ccd0b1beb53 00:22:48.315 04:07:30 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:22:48.315 04:07:30 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:48.315 04:07:30 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=921d341e-b6c3-476a-9fa7-1ccd0b1beb53 00:22:48.315 04:07:30 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:22:48.315 04:07:30 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 921d341e-b6c3-476a-9fa7-1ccd0b1beb53 00:22:48.315 04:07:30 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=921d341e-b6c3-476a-9fa7-1ccd0b1beb53 00:22:48.315 04:07:30 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:48.315 04:07:30 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:22:48.315 04:07:30 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:22:48.315 04:07:30 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 921d341e-b6c3-476a-9fa7-1ccd0b1beb53 00:22:48.572 04:07:31 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:48.572 { 00:22:48.572 "name": "921d341e-b6c3-476a-9fa7-1ccd0b1beb53", 00:22:48.572 "aliases": [ 00:22:48.572 "lvs/nvme0n1p0" 00:22:48.572 ], 00:22:48.572 "product_name": "Logical Volume", 00:22:48.572 "block_size": 4096, 00:22:48.572 "num_blocks": 26476544, 00:22:48.572 "uuid": "921d341e-b6c3-476a-9fa7-1ccd0b1beb53", 00:22:48.572 "assigned_rate_limits": { 00:22:48.572 "rw_ios_per_sec": 0, 00:22:48.572 "rw_mbytes_per_sec": 0, 00:22:48.572 "r_mbytes_per_sec": 0, 00:22:48.572 "w_mbytes_per_sec": 0 00:22:48.572 }, 00:22:48.573 "claimed": false, 00:22:48.573 "zoned": false, 00:22:48.573 "supported_io_types": { 00:22:48.573 "read": true, 00:22:48.573 "write": true, 00:22:48.573 "unmap": true, 00:22:48.573 "flush": false, 00:22:48.573 "reset": true, 00:22:48.573 "nvme_admin": false, 00:22:48.573 "nvme_io": false, 00:22:48.573 "nvme_io_md": false, 00:22:48.573 "write_zeroes": true, 00:22:48.573 "zcopy": false, 00:22:48.573 "get_zone_info": false, 00:22:48.573 "zone_management": false, 00:22:48.573 "zone_append": false, 00:22:48.573 "compare": false, 00:22:48.573 "compare_and_write": false, 00:22:48.573 "abort": false, 00:22:48.573 "seek_hole": true, 00:22:48.573 "seek_data": true, 00:22:48.573 "copy": false, 00:22:48.573 "nvme_iov_md": false 00:22:48.573 }, 00:22:48.573 "driver_specific": { 00:22:48.573 "lvol": { 00:22:48.573 "lvol_store_uuid": "af39bbc6-c6f0-4389-b7ef-c7d8d025ef17", 00:22:48.573 "base_bdev": "nvme0n1", 00:22:48.573 "thin_provision": true, 00:22:48.573 "num_allocated_clusters": 0, 00:22:48.573 "snapshot": false, 00:22:48.573 "clone": false, 00:22:48.573 "esnap_clone": false 00:22:48.573 } 00:22:48.573 } 00:22:48.573 } 00:22:48.573 ]' 00:22:48.573 04:07:31 ftl.ftl_trim -- 
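The lvstore and lvol setup traced here follows a clear pattern: clear any stale lvstores, create a fresh one on nvme0n1, then carve a thin-provisioned lvol out of it. A minimal sketch of the same RPC sequence, with names and sizes taken from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for u in $($rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid'); do
        $rpc bdev_lvol_delete_lvstore -u "$u"           # drop leftovers from earlier runs
    done
    lvs=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)    # prints the new lvstore UUID
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs" # thin-provisioned, 103424 MiB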
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:48.573 04:07:31 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:22:48.573 04:07:31 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:48.573 04:07:31 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:48.573 04:07:31 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:48.573 04:07:31 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:22:48.573 04:07:31 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:22:48.573 04:07:31 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:22:48.573 04:07:31 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:48.830 04:07:31 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:48.830 04:07:31 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:48.830 04:07:31 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 921d341e-b6c3-476a-9fa7-1ccd0b1beb53 00:22:48.830 04:07:31 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=921d341e-b6c3-476a-9fa7-1ccd0b1beb53 00:22:48.830 04:07:31 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:48.830 04:07:31 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:22:48.830 04:07:31 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:22:48.830 04:07:31 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 921d341e-b6c3-476a-9fa7-1ccd0b1beb53 00:22:49.089 04:07:31 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:49.089 { 00:22:49.089 "name": "921d341e-b6c3-476a-9fa7-1ccd0b1beb53", 00:22:49.089 "aliases": [ 00:22:49.089 "lvs/nvme0n1p0" 00:22:49.089 ], 00:22:49.089 "product_name": "Logical Volume", 00:22:49.089 "block_size": 4096, 00:22:49.089 "num_blocks": 26476544, 00:22:49.089 "uuid": "921d341e-b6c3-476a-9fa7-1ccd0b1beb53", 00:22:49.089 "assigned_rate_limits": { 00:22:49.089 "rw_ios_per_sec": 0, 00:22:49.089 "rw_mbytes_per_sec": 0, 00:22:49.089 "r_mbytes_per_sec": 0, 00:22:49.089 "w_mbytes_per_sec": 0 00:22:49.089 }, 00:22:49.089 "claimed": false, 00:22:49.089 "zoned": false, 00:22:49.089 "supported_io_types": { 00:22:49.089 "read": true, 00:22:49.089 "write": true, 00:22:49.089 "unmap": true, 00:22:49.089 "flush": false, 00:22:49.089 "reset": true, 00:22:49.089 "nvme_admin": false, 00:22:49.089 "nvme_io": false, 00:22:49.089 "nvme_io_md": false, 00:22:49.089 "write_zeroes": true, 00:22:49.089 "zcopy": false, 00:22:49.089 "get_zone_info": false, 00:22:49.089 "zone_management": false, 00:22:49.089 "zone_append": false, 00:22:49.089 "compare": false, 00:22:49.089 "compare_and_write": false, 00:22:49.089 "abort": false, 00:22:49.089 "seek_hole": true, 00:22:49.089 "seek_data": true, 00:22:49.089 "copy": false, 00:22:49.089 "nvme_iov_md": false 00:22:49.089 }, 00:22:49.089 "driver_specific": { 00:22:49.089 "lvol": { 00:22:49.089 "lvol_store_uuid": "af39bbc6-c6f0-4389-b7ef-c7d8d025ef17", 00:22:49.089 "base_bdev": "nvme0n1", 00:22:49.089 "thin_provision": true, 00:22:49.089 "num_allocated_clusters": 0, 00:22:49.089 "snapshot": false, 00:22:49.089 "clone": false, 00:22:49.089 "esnap_clone": false 00:22:49.089 } 00:22:49.089 } 00:22:49.089 } 00:22:49.089 ]' 00:22:49.089 04:07:31 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:49.089 04:07:31 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:22:49.089 04:07:31 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:49.089 04:07:31 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:49.089 04:07:31 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:49.089 04:07:31 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:22:49.089 04:07:31 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:22:49.089 04:07:31 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:49.348 04:07:31 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:22:49.348 04:07:31 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:22:49.348 04:07:31 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 921d341e-b6c3-476a-9fa7-1ccd0b1beb53 00:22:49.348 04:07:31 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=921d341e-b6c3-476a-9fa7-1ccd0b1beb53 00:22:49.348 04:07:31 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:49.348 04:07:31 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:22:49.348 04:07:31 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:22:49.348 04:07:31 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 921d341e-b6c3-476a-9fa7-1ccd0b1beb53 00:22:49.607 04:07:32 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:49.607 { 00:22:49.607 "name": "921d341e-b6c3-476a-9fa7-1ccd0b1beb53", 00:22:49.607 "aliases": [ 00:22:49.607 "lvs/nvme0n1p0" 00:22:49.607 ], 00:22:49.607 "product_name": "Logical Volume", 00:22:49.607 "block_size": 4096, 00:22:49.607 "num_blocks": 26476544, 00:22:49.607 "uuid": "921d341e-b6c3-476a-9fa7-1ccd0b1beb53", 00:22:49.607 "assigned_rate_limits": { 00:22:49.607 "rw_ios_per_sec": 0, 00:22:49.607 "rw_mbytes_per_sec": 0, 00:22:49.607 "r_mbytes_per_sec": 0, 00:22:49.607 "w_mbytes_per_sec": 0 00:22:49.607 }, 00:22:49.607 "claimed": false, 00:22:49.607 "zoned": false, 00:22:49.607 "supported_io_types": { 00:22:49.607 "read": true, 00:22:49.607 "write": true, 00:22:49.607 "unmap": true, 00:22:49.607 "flush": false, 00:22:49.607 "reset": true, 00:22:49.607 "nvme_admin": false, 00:22:49.607 "nvme_io": false, 00:22:49.607 "nvme_io_md": false, 00:22:49.607 "write_zeroes": true, 00:22:49.607 "zcopy": false, 00:22:49.607 "get_zone_info": false, 00:22:49.607 "zone_management": false, 00:22:49.607 "zone_append": false, 00:22:49.607 "compare": false, 00:22:49.607 "compare_and_write": false, 00:22:49.607 "abort": false, 00:22:49.607 "seek_hole": true, 00:22:49.607 "seek_data": true, 00:22:49.607 "copy": false, 00:22:49.607 "nvme_iov_md": false 00:22:49.607 }, 00:22:49.607 "driver_specific": { 00:22:49.607 "lvol": { 00:22:49.607 "lvol_store_uuid": "af39bbc6-c6f0-4389-b7ef-c7d8d025ef17", 00:22:49.607 "base_bdev": "nvme0n1", 00:22:49.607 "thin_provision": true, 00:22:49.607 "num_allocated_clusters": 0, 00:22:49.607 "snapshot": false, 00:22:49.607 "clone": false, 00:22:49.607 "esnap_clone": false 00:22:49.607 } 00:22:49.607 } 00:22:49.607 } 00:22:49.607 ]' 00:22:49.607 04:07:32 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:49.607 04:07:32 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:22:49.607 04:07:32 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:49.607 04:07:32 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:22:49.607 04:07:32 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:49.607 04:07:32 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:22:49.607 04:07:32 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:22:49.607 04:07:32 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 921d341e-b6c3-476a-9fa7-1ccd0b1beb53 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:22:49.867 [2024-12-07 04:07:32.464156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.867 [2024-12-07 04:07:32.464205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:49.867 [2024-12-07 04:07:32.464243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:49.867 [2024-12-07 04:07:32.464254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.867 [2024-12-07 04:07:32.467737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.867 [2024-12-07 04:07:32.467780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:49.867 [2024-12-07 04:07:32.467796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.436 ms 00:22:49.867 [2024-12-07 04:07:32.467806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.867 [2024-12-07 04:07:32.467973] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:49.868 [2024-12-07 04:07:32.469044] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:49.868 [2024-12-07 04:07:32.469083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.868 [2024-12-07 04:07:32.469095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:49.868 [2024-12-07 04:07:32.469109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.123 ms 00:22:49.868 [2024-12-07 04:07:32.469121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.868 [2024-12-07 04:07:32.469310] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 03b91864-17ea-4316-91d7-3cf42d3b8eda 00:22:49.868 [2024-12-07 04:07:32.470742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.868 [2024-12-07 04:07:32.470779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:49.868 [2024-12-07 04:07:32.470792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:49.868 [2024-12-07 04:07:32.470806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.868 [2024-12-07 04:07:32.478294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.868 [2024-12-07 04:07:32.478330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:49.868 [2024-12-07 04:07:32.478346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.335 ms 00:22:49.868 [2024-12-07 04:07:32.478359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.868 [2024-12-07 04:07:32.478534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.868 [2024-12-07 04:07:32.478554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:49.868 [2024-12-07 04:07:32.478566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
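The 103424 figure used for the lvol is not arbitrary; it is the base bdev size in MiB, derived from the geometry dumped above (block_size 4096, num_blocks 26476544). A quick check of that arithmetic:

    echo $(( 4096 * 26476544 / 1024 / 1024 ))   # = 103424 MiB, matching bdev_size above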
duration: 0.086 ms 00:22:49.868 [2024-12-07 04:07:32.478585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.868 [2024-12-07 04:07:32.478646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.868 [2024-12-07 04:07:32.478661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:49.868 [2024-12-07 04:07:32.478673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:49.868 [2024-12-07 04:07:32.478690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.868 [2024-12-07 04:07:32.478749] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:49.868 [2024-12-07 04:07:32.483836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.868 [2024-12-07 04:07:32.483872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:49.868 [2024-12-07 04:07:32.483916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.098 ms 00:22:49.868 [2024-12-07 04:07:32.483926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.868 [2024-12-07 04:07:32.484049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.868 [2024-12-07 04:07:32.484079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:49.868 [2024-12-07 04:07:32.484105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:49.868 [2024-12-07 04:07:32.484115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.868 [2024-12-07 04:07:32.484168] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:49.868 [2024-12-07 04:07:32.484318] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:49.868 [2024-12-07 04:07:32.484339] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:49.868 [2024-12-07 04:07:32.484353] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:49.868 [2024-12-07 04:07:32.484370] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:49.868 [2024-12-07 04:07:32.484382] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:49.868 [2024-12-07 04:07:32.484397] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:49.868 [2024-12-07 04:07:32.484408] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:49.868 [2024-12-07 04:07:32.484423] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:49.868 [2024-12-07 04:07:32.484436] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:49.868 [2024-12-07 04:07:32.484450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.868 [2024-12-07 04:07:32.484461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:49.868 [2024-12-07 04:07:32.484474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:22:49.868 [2024-12-07 04:07:32.484484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:49.868 [2024-12-07 04:07:32.484597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:49.868 
00:22:49.868 [2024-12-07 04:07:32.484597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:49.868 [2024-12-07 04:07:32.484609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:22:49.868 [2024-12-07 04:07:32.484623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms
00:22:49.868 [2024-12-07 04:07:32.484633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:49.868 [2024-12-07 04:07:32.484793] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:22:49.868 [2024-12-07 04:07:32.484808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:22:49.868 [2024-12-07 04:07:32.484821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:22:49.868 [2024-12-07 04:07:32.484831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:22:49.868 [2024-12-07 04:07:32.484845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:22:49.868 [2024-12-07 04:07:32.484856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:22:49.868 [2024-12-07 04:07:32.484870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:22:49.868 [2024-12-07 04:07:32.484879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:22:49.868 [2024-12-07 04:07:32.484891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:22:49.868 [2024-12-07 04:07:32.484901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:22:49.868 [2024-12-07 04:07:32.484913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:22:49.868 [2024-12-07 04:07:32.484924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:22:49.868 [2024-12-07 04:07:32.484937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:22:49.868 [2024-12-07 04:07:32.484947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:22:49.868 [2024-12-07 04:07:32.484970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB
00:22:49.868 [2024-12-07 04:07:32.484981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:22:49.868 [2024-12-07 04:07:32.484996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:22:49.868 [2024-12-07 04:07:32.485007] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB
00:22:49.868 [2024-12-07 04:07:32.485020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:22:49.868 [2024-12-07 04:07:32.485030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:22:49.868 [2024-12-07 04:07:32.485042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:22:49.868 [2024-12-07 04:07:32.485052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:22:49.868 [2024-12-07 04:07:32.485064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:22:49.868 [2024-12-07 04:07:32.485074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:22:49.868 [2024-12-07 04:07:32.485086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:22:49.868 [2024-12-07 04:07:32.485095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:22:49.868 [2024-12-07 04:07:32.485107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:22:49.868 [2024-12-07 04:07:32.485117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:22:49.868 [2024-12-07 04:07:32.485129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:22:49.868 [2024-12-07 04:07:32.485139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:22:49.868 [2024-12-07 04:07:32.485150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:22:49.868 [2024-12-07 04:07:32.485159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:22:49.868 [2024-12-07 04:07:32.485174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:22:49.868 [2024-12-07 04:07:32.485183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:22:49.869 [2024-12-07 04:07:32.485196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:22:49.869 [2024-12-07 04:07:32.485205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:22:49.869 [2024-12-07 04:07:32.485217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:22:49.869 [2024-12-07 04:07:32.485226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:22:49.869 [2024-12-07 04:07:32.485239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:22:49.869 [2024-12-07 04:07:32.485248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:22:49.869 [2024-12-07 04:07:32.485260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:22:49.869 [2024-12-07 04:07:32.485270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:22:49.869 [2024-12-07 04:07:32.485281] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:22:49.869 [2024-12-07 04:07:32.485291] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:22:49.869 [2024-12-07 04:07:32.485304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:22:49.869 [2024-12-07 04:07:32.485314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:22:49.869 [2024-12-07 04:07:32.485327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:22:49.869 [2024-12-07 04:07:32.485337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:22:49.869 [2024-12-07 04:07:32.485355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:22:49.869 [2024-12-07 04:07:32.485365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:22:49.869 [2024-12-07 04:07:32.485377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:22:49.869 [2024-12-07 04:07:32.485387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:22:49.869 [2024-12-07 04:07:32.485399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:22:49.869 [2024-12-07 04:07:32.485410] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:22:49.869 [2024-12-07 04:07:32.485426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:22:49.869 [2024-12-07 04:07:32.485441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:22:49.869 [2024-12-07 04:07:32.485454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:22:49.869 [2024-12-07 04:07:32.485465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:22:49.869 [2024-12-07 04:07:32.485478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:22:49.869 [2024-12-07 04:07:32.485490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:22:49.869 [2024-12-07 04:07:32.485503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:22:49.869 [2024-12-07 04:07:32.485514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:22:49.869 [2024-12-07 04:07:32.485527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:22:49.869 [2024-12-07 04:07:32.485537] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:22:49.869 [2024-12-07 04:07:32.485555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:22:49.869 [2024-12-07 04:07:32.485565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:22:49.869 [2024-12-07 04:07:32.485578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:22:49.869 [2024-12-07 04:07:32.485589] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:22:49.869 [2024-12-07 04:07:32.485601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:22:49.869 [2024-12-07 04:07:32.485612] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:22:49.869 [2024-12-07 04:07:32.485630] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:22:49.869 [2024-12-07 04:07:32.485641] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:22:49.869 [2024-12-07 04:07:32.485655] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:22:49.869 [2024-12-07 04:07:32.485665] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:22:49.869 [2024-12-07 04:07:32.485679] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:22:49.869 [2024-12-07 04:07:32.485690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:49.869 [2024-12-07 04:07:32.485704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:22:49.869 [2024-12-07 04:07:32.485715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.956 ms
00:22:49.869 [2024-12-07 04:07:32.485727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
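
The blk_offs/blk_sz values in the superblock dump above are hexadecimal block counts, so with the 4 KiB FTL block they can be cross-checked against the MiB figures in the region dump; for instance, the type:0x2 region is the 90 MiB l2p region (shell arithmetic only):

  # 0x5a00 blocks * 4096 B = 90 MiB, matching "Region l2p ... blocks: 90.00 MiB"
  echo $((0x5a00)) $((0x5a00 * 4096 / 1024 / 1024))   # 23040 90
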
00:22:49.869 [2024-12-07 04:07:32.485857] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while.
00:22:49.869 [2024-12-07 04:07:32.485877] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks
00:22:54.064 [2024-12-07 04:07:36.164051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:54.064 [2024-12-07 04:07:36.164136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache
00:22:54.064 [2024-12-07 04:07:36.164153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3684.162 ms
00:22:54.064 [2024-12-07 04:07:36.164168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:54.064 [2024-12-07 04:07:36.201738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:54.064 [2024-12-07 04:07:36.201797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:22:54.064 [2024-12-07 04:07:36.201813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.138 ms
00:22:54.064 [2024-12-07 04:07:36.201828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:54.064 [2024-12-07 04:07:36.202013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:54.064 [2024-12-07 04:07:36.202032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:22:54.064 [2024-12-07 04:07:36.202062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms
00:22:54.064 [2024-12-07 04:07:36.202079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:54.064 [2024-12-07 04:07:36.259666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:54.064 [2024-12-07 04:07:36.259712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:22:54.064 [2024-12-07 04:07:36.259727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.624 ms
00:22:54.064 [2024-12-07 04:07:36.259742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:54.064 [2024-12-07 04:07:36.259884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:54.064 [2024-12-07 04:07:36.259901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:22:54.064 [2024-12-07 04:07:36.259913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:22:54.064 [2024-12-07 04:07:36.259926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:54.064 [2024-12-07 04:07:36.260398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:54.064 [2024-12-07 04:07:36.260441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:22:54.064 [2024-12-07 04:07:36.260454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms
00:22:54.064 [2024-12-07 04:07:36.260467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:54.064 [2024-12-07 04:07:36.260601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:54.064 [2024-12-07 04:07:36.260616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:22:54.064 [2024-12-07 04:07:36.260642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms
00:22:54.064 [2024-12-07 04:07:36.260659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
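
The 3684.162 ms Scrub NV cache step above dominates this startup (4267 ms total, per the finish message further down); assuming the scrub rewrites the whole 5171.00 MiB cache region once, that works out to roughly 1.4 GiB/s on the cache device:

  # NV cache MiB / scrub seconds -> approximate scrub bandwidth
  awk 'BEGIN { printf "%.0f MiB/s\n", 5171 / 3.684162 }'   # ~1403 MiB/s
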
00:22:54.064 [2024-12-07 04:07:36.281914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:54.064 [2024-12-07 04:07:36.281982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:22:54.064 [2024-12-07 04:07:36.281998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.236 ms
00:22:54.064 [2024-12-07 04:07:36.282011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:54.064 [2024-12-07 04:07:36.294887] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:22:54.064 [2024-12-07 04:07:36.311339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:54.064 [2024-12-07 04:07:36.311390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:22:54.064 [2024-12-07 04:07:36.311407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.212 ms
00:22:54.064 [2024-12-07 04:07:36.311418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:54.064 [2024-12-07 04:07:36.415905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:54.064 [2024-12-07 04:07:36.415969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P
00:22:54.064 [2024-12-07 04:07:36.415990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.519 ms
00:22:54.064 [2024-12-07 04:07:36.416001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:54.064 [2024-12-07 04:07:36.416268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:54.064 [2024-12-07 04:07:36.416291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:22:54.064 [2024-12-07 04:07:36.416308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms
00:22:54.064 [2024-12-07 04:07:36.416319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:54.064 [2024-12-07 04:07:36.452614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:54.064 [2024-12-07 04:07:36.452654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata
00:22:54.064 [2024-12-07 04:07:36.452696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.287 ms
00:22:54.064 [2024-12-07 04:07:36.452707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:54.064 [2024-12-07 04:07:36.488955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:54.064 [2024-12-07 04:07:36.488991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata
00:22:54.064 [2024-12-07 04:07:36.489008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.188 ms
00:22:54.064 [2024-12-07 04:07:36.489019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:54.064 [2024-12-07 04:07:36.489826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:54.064 [2024-12-07 04:07:36.489858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:22:54.064 [2024-12-07 04:07:36.489873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.708 ms
00:22:54.064 [2024-12-07 04:07:36.489883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:54.064 [2024-12-07 04:07:36.611581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:54.064 [2024-12-07 04:07:36.611658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region
00:22:54.064 [2024-12-07 04:07:36.611699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 121.806 ms
00:22:54.064 [2024-12-07 04:07:36.611711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:54.064 [2024-12-07 04:07:36.650157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:54.064 [2024-12-07 04:07:36.650223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:22:54.064 [2024-12-07 04:07:36.650242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.359 ms
00:22:54.064 [2024-12-07 04:07:36.650254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:54.064 [2024-12-07 04:07:36.687190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:54.064 [2024-12-07 04:07:36.687232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log
00:22:54.064 [2024-12-07 04:07:36.687249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.884 ms
00:22:54.064 [2024-12-07 04:07:36.687260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:54.064 [2024-12-07 04:07:36.723058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:54.064 [2024-12-07 04:07:36.723112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:22:54.064 [2024-12-07 04:07:36.723146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.724 ms
00:22:54.064 [2024-12-07 04:07:36.723156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:54.064 [2024-12-07 04:07:36.723267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:54.064 [2024-12-07 04:07:36.723285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:22:54.064 [2024-12-07 04:07:36.723302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:22:54.064 [2024-12-07 04:07:36.723313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:54.064 [2024-12-07 04:07:36.723427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:54.064 [2024-12-07 04:07:36.723444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:22:54.064 [2024-12-07 04:07:36.723458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms
00:22:54.065 [2024-12-07 04:07:36.723468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:54.065 [2024-12-07 04:07:36.724498] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:22:54.065 [2024-12-07 04:07:36.728821] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4267.008 ms, result 0
00:22:54.065 [2024-12-07 04:07:36.729840] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:22:54.065 {
00:22:54.065 "name": "ftl0",
00:22:54.065 "uuid": "03b91864-17ea-4316-91d7-3cf42d3b8eda"
00:22:54.065 }
00:22:54.065 04:07:36 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0
00:22:54.065 04:07:36 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0
00:22:54.065 04:07:36 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:22:54.065 04:07:36 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i
00:22:54.065 04:07:36 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:22:54.065 04:07:36 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:22:54.065 04:07:36 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
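
The waitforbdev trace above and below amounts to a readiness barrier: wait for bdev examination to finish, then ask for the bdev with a timeout. A minimal sketch of such a helper, not the actual autotest_common.sh implementation ($rootdir stands in for the SPDK repo path):

  waitforbdev_sketch() {
    local bdev_name=$1 bdev_timeout=${2:-2000}   # timeout in ms, as in the trace
    "$rootdir/scripts/rpc.py" bdev_wait_for_examine || return 1
    # bdev_get_bdevs -t waits until the bdev appears or the timeout expires
    "$rootdir/scripts/rpc.py" bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout" >/dev/null
  }
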
00:22:54.326 04:07:36 ftl.ftl_trim -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000
00:22:54.587 [
00:22:54.587 {
00:22:54.587 "name": "ftl0",
00:22:54.587 "aliases": [
00:22:54.587 "03b91864-17ea-4316-91d7-3cf42d3b8eda"
00:22:54.587 ],
00:22:54.587 "product_name": "FTL disk",
00:22:54.587 "block_size": 4096,
00:22:54.587 "num_blocks": 23592960,
00:22:54.587 "uuid": "03b91864-17ea-4316-91d7-3cf42d3b8eda",
00:22:54.587 "assigned_rate_limits": {
00:22:54.587 "rw_ios_per_sec": 0,
00:22:54.587 "rw_mbytes_per_sec": 0,
00:22:54.587 "r_mbytes_per_sec": 0,
00:22:54.587 "w_mbytes_per_sec": 0
00:22:54.587 },
00:22:54.587 "claimed": false,
00:22:54.587 "zoned": false,
00:22:54.587 "supported_io_types": {
00:22:54.587 "read": true,
00:22:54.587 "write": true,
00:22:54.587 "unmap": true,
00:22:54.587 "flush": true,
00:22:54.587 "reset": false,
00:22:54.587 "nvme_admin": false,
00:22:54.587 "nvme_io": false,
00:22:54.587 "nvme_io_md": false,
00:22:54.587 "write_zeroes": true,
00:22:54.587 "zcopy": false,
00:22:54.587 "get_zone_info": false,
00:22:54.587 "zone_management": false,
00:22:54.587 "zone_append": false,
00:22:54.587 "compare": false,
00:22:54.587 "compare_and_write": false,
00:22:54.587 "abort": false,
00:22:54.587 "seek_hole": false,
00:22:54.587 "seek_data": false,
00:22:54.587 "copy": false,
00:22:54.587 "nvme_iov_md": false
00:22:54.587 },
00:22:54.587 "driver_specific": {
00:22:54.587 "ftl": {
00:22:54.587 "base_bdev": "921d341e-b6c3-476a-9fa7-1ccd0b1beb53",
00:22:54.587 "cache": "nvc0n1p0"
00:22:54.587 }
00:22:54.587 }
00:22:54.587 }
00:22:54.587 ]
00:22:54.587 04:07:37 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0
00:22:54.587 04:07:37 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": ['
00:22:54.587 04:07:37 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:22:54.848 04:07:37 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}'
00:22:54.848 04:07:37 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0
00:22:55.108 04:07:37 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[
00:22:55.108 {
00:22:55.108 "name": "ftl0",
00:22:55.108 "aliases": [
00:22:55.108 "03b91864-17ea-4316-91d7-3cf42d3b8eda"
00:22:55.108 ],
00:22:55.108 "product_name": "FTL disk",
00:22:55.108 "block_size": 4096,
00:22:55.108 "num_blocks": 23592960,
00:22:55.108 "uuid": "03b91864-17ea-4316-91d7-3cf42d3b8eda",
00:22:55.108 "assigned_rate_limits": {
00:22:55.108 "rw_ios_per_sec": 0,
00:22:55.108 "rw_mbytes_per_sec": 0,
00:22:55.108 "r_mbytes_per_sec": 0,
00:22:55.108 "w_mbytes_per_sec": 0
00:22:55.108 },
00:22:55.108 "claimed": false,
00:22:55.108 "zoned": false,
00:22:55.108 "supported_io_types": {
00:22:55.108 "read": true,
00:22:55.108 "write": true,
00:22:55.108 "unmap": true,
00:22:55.108 "flush": true,
00:22:55.108 "reset": false,
00:22:55.108 "nvme_admin": false,
00:22:55.108 "nvme_io": false,
00:22:55.108 "nvme_io_md": false,
00:22:55.108 "write_zeroes": true,
00:22:55.108 "zcopy": false,
00:22:55.108 "get_zone_info": false,
00:22:55.108 "zone_management": false,
00:22:55.108 "zone_append": false,
00:22:55.108 "compare": false,
00:22:55.108 "compare_and_write": false,
00:22:55.108 "abort": false,
00:22:55.108 "seek_hole": false,
00:22:55.108 "seek_data": false,
00:22:55.108 "copy": false,
00:22:55.108 "nvme_iov_md": false
00:22:55.108 },
00:22:55.108 "driver_specific": {
00:22:55.108 "ftl": {
00:22:55.108 "base_bdev": "921d341e-b6c3-476a-9fa7-1ccd0b1beb53",
00:22:55.108 "cache": "nvc0n1p0"
00:22:55.108 }
00:22:55.108 }
00:22:55.108 }
00:22:55.108 ]'
00:22:55.108 04:07:37 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks'
00:22:55.108 04:07:37 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960
00:22:55.108 04:07:37 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
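
The jq pipeline above recovers num_blocks from the saved bdev_info; 23592960 blocks of 4096 bytes is 90 GiB, matching the L2P entry count from startup, i.e. one mapping entry per user LBA. For example:

  nb=$(jq '.[] .num_blocks' <<< "$bdev_info")    # 23592960, as captured above
  echo $((nb * 4096 / 1024 / 1024 / 1024))       # 90 (GiB)
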
00:22:55.108 [2024-12-07 04:07:37.825139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:55.108 [2024-12-07 04:07:37.825196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:22:55.108 [2024-12-07 04:07:37.825216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:22:55.108 [2024-12-07 04:07:37.825233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.108 [2024-12-07 04:07:37.825290] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:22:55.108 [2024-12-07 04:07:37.829479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:55.108 [2024-12-07 04:07:37.829515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:22:55.108 [2024-12-07 04:07:37.829535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.173 ms
00:22:55.108 [2024-12-07 04:07:37.829545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.108 [2024-12-07 04:07:37.830568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:55.108 [2024-12-07 04:07:37.830597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:22:55.108 [2024-12-07 04:07:37.830612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.942 ms
00:22:55.108 [2024-12-07 04:07:37.830623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.108 [2024-12-07 04:07:37.833442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:55.108 [2024-12-07 04:07:37.833469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:22:55.108 [2024-12-07 04:07:37.833484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.776 ms
00:22:55.108 [2024-12-07 04:07:37.833495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.108 [2024-12-07 04:07:37.839288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:55.108 [2024-12-07 04:07:37.839321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:22:55.108 [2024-12-07 04:07:37.839337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.743 ms
00:22:55.108 [2024-12-07 04:07:37.839348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.369 [2024-12-07 04:07:37.877509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:55.369 [2024-12-07 04:07:37.877548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:22:55.369 [2024-12-07 04:07:37.877568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.075 ms
00:22:55.369 [2024-12-07 04:07:37.877579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.369 [2024-12-07 04:07:37.899839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:55.369 [2024-12-07 04:07:37.899878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:22:55.369 [2024-12-07 04:07:37.899911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.176 ms
00:22:55.369 [2024-12-07 04:07:37.899925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.369 [2024-12-07 04:07:37.900249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:55.369 [2024-12-07 04:07:37.900280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:22:55.369 [2024-12-07 04:07:37.900295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms
00:22:55.369 [2024-12-07 04:07:37.900307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.369 [2024-12-07 04:07:37.937089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:55.369 [2024-12-07 04:07:37.937126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:22:55.369 [2024-12-07 04:07:37.937159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.776 ms
00:22:55.369 [2024-12-07 04:07:37.937168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.369 [2024-12-07 04:07:37.973710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:55.369 [2024-12-07 04:07:37.973745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:22:55.369 [2024-12-07 04:07:37.973764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.483 ms
00:22:55.369 [2024-12-07 04:07:37.973774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.369 [2024-12-07 04:07:38.009400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:55.369 [2024-12-07 04:07:38.009437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:22:55.369 [2024-12-07 04:07:38.009469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.574 ms
00:22:55.369 [2024-12-07 04:07:38.009479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.369 [2024-12-07 04:07:38.045005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:55.369 [2024-12-07 04:07:38.045040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:22:55.369 [2024-12-07 04:07:38.045055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.386 ms
00:22:55.369 [2024-12-07 04:07:38.045065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.369 [2024-12-07 04:07:38.045175] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:22:55.369 [2024-12-07 04:07:38.045193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:22:55.369 [2024-12-07 04:07:38.045209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:22:55.369 [2024-12-07 04:07:38.045220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:22:55.369 [2024-12-07 04:07:38.045234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:22:55.369 [2024-12-07 04:07:38.045245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:22:55.369 [2024-12-07 04:07:38.045261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:22:55.369 [2024-12-07 04:07:38.045271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:22:55.369 [2024-12-07 04:07:38.045286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:22:55.369 [2024-12-07 04:07:38.045297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:22:55.369 [2024-12-07 04:07:38.045310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:22:55.369 [2024-12-07 04:07:38.045322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:22:55.369 [2024-12-07 04:07:38.045335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:22:55.369 [2024-12-07 04:07:38.045346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:22:55.369 [2024-12-07 04:07:38.045376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:22:55.369 [2024-12-07 04:07:38.045387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:22:55.369 [2024-12-07 04:07:38.045400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:22:55.369 [2024-12-07 04:07:38.045410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:22:55.369 [2024-12-07 04:07:38.045424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:22:55.369 [2024-12-07 04:07:38.045435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:22:55.369 [2024-12-07 04:07:38.045468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:22:55.369 [2024-12-07 04:07:38.045479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:22:55.369 [2024-12-07 04:07:38.045497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:22:55.369 [2024-12-07 04:07:38.045508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:22:55.369 [2024-12-07 04:07:38.045522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:22:55.369 [2024-12-07 04:07:38.045534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:22:55.369 [2024-12-07 04:07:38.045548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:22:55.369 [2024-12-07 04:07:38.045558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:22:55.369 [2024-12-07 04:07:38.045571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:22:55.369 [2024-12-07 04:07:38.045583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:22:55.369 [2024-12-07 04:07:38.045598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:22:55.369 [2024-12-07 04:07:38.045609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:22:55.369 [2024-12-07 04:07:38.045623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.045634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.045648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.045659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.045673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.045684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.045700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.045711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.045725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.045735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.045749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.045760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.045774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.045785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.045799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.045810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.045824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.045835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.045848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.045859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.045872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.045882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.045898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.045909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.045922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.045950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.045963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.045975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.045988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.045999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:22:55.370 [2024-12-07 04:07:38.046517] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:22:55.370 [2024-12-07 04:07:38.046533] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 03b91864-17ea-4316-91d7-3cf42d3b8eda
00:22:55.370 [2024-12-07 04:07:38.046545] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:22:55.370 [2024-12-07 04:07:38.046558] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:22:55.370 [2024-12-07 04:07:38.046569] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:22:55.370 [2024-12-07 04:07:38.046585] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:22:55.370 [2024-12-07 04:07:38.046595] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:22:55.370 [2024-12-07 04:07:38.046608] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
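
All 100 bands are still free with wr_cnt 0 and user writes is 0, so the write amplification factor (presumably total media writes over user writes, 960 / 0 here) degenerates to inf in the dump above; the 960 total writes are metadata traffic from this create-and-unload cycle. A worked check of that division:

  awk 'BEGIN { u = 0; t = 960; print (u ? t / u : "inf") }'   # inf
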
00:22:55.370 [2024-12-07 04:07:38.046618] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:22:55.370 [2024-12-07 04:07:38.046630] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:22:55.370 [2024-12-07 04:07:38.046639] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:22:55.370 [2024-12-07 04:07:38.046652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:55.370 [2024-12-07 04:07:38.046662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:22:55.370 [2024-12-07 04:07:38.046676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.482 ms
00:22:55.370 [2024-12-07 04:07:38.046686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.370 [2024-12-07 04:07:38.066880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:55.370 [2024-12-07 04:07:38.066917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:22:55.370 [2024-12-07 04:07:38.066960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.161 ms
00:22:55.370 [2024-12-07 04:07:38.066971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.370 [2024-12-07 04:07:38.067570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:55.370 [2024-12-07 04:07:38.067595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:22:55.370 [2024-12-07 04:07:38.067610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms
00:22:55.370 [2024-12-07 04:07:38.067620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.630 [2024-12-07 04:07:38.135955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:55.630 [2024-12-07 04:07:38.135993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:22:55.630 [2024-12-07 04:07:38.136009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:55.630 [2024-12-07 04:07:38.136020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.630 [2024-12-07 04:07:38.136166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:55.630 [2024-12-07 04:07:38.136179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:22:55.630 [2024-12-07 04:07:38.136193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:55.630 [2024-12-07 04:07:38.136203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.630 [2024-12-07 04:07:38.136297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:55.630 [2024-12-07 04:07:38.136311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:22:55.630 [2024-12-07 04:07:38.136332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:55.630 [2024-12-07 04:07:38.136342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.630 [2024-12-07 04:07:38.136393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:55.630 [2024-12-07 04:07:38.136405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:22:55.630 [2024-12-07 04:07:38.136419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:55.630 [2024-12-07 04:07:38.136429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.630 [2024-12-07 04:07:38.268056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:55.630 [2024-12-07 04:07:38.268117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:22:55.630 [2024-12-07 04:07:38.268135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:55.630 [2024-12-07 04:07:38.268146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.890 [2024-12-07 04:07:38.368711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:55.890 [2024-12-07 04:07:38.368763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:22:55.890 [2024-12-07 04:07:38.368783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:55.890 [2024-12-07 04:07:38.368794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.890 [2024-12-07 04:07:38.368976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:55.890 [2024-12-07 04:07:38.368991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:22:55.890 [2024-12-07 04:07:38.369008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:55.890 [2024-12-07 04:07:38.369022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.890 [2024-12-07 04:07:38.369131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:55.890 [2024-12-07 04:07:38.369144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:22:55.890 [2024-12-07 04:07:38.369156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:55.890 [2024-12-07 04:07:38.369166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.890 [2024-12-07 04:07:38.369355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:55.890 [2024-12-07 04:07:38.369369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:22:55.890 [2024-12-07 04:07:38.369383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:55.890 [2024-12-07 04:07:38.369396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.890 [2024-12-07 04:07:38.369469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:55.890 [2024-12-07 04:07:38.369482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:22:55.890 [2024-12-07 04:07:38.369496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:55.890 [2024-12-07 04:07:38.369506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.890 [2024-12-07 04:07:38.369588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:55.890 [2024-12-07 04:07:38.369600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:22:55.890 [2024-12-07 04:07:38.369618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:55.890 [2024-12-07 04:07:38.369628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.890 [2024-12-07 04:07:38.369710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:55.890 [2024-12-07 04:07:38.369722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:22:55.890 [2024-12-07 04:07:38.369735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:55.890 [2024-12-07 04:07:38.369746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:55.890 [2024-12-07 04:07:38.370029] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 545.768 ms, result 0
00:22:55.890 true
00:22:55.890 04:07:38 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78140
00:22:55.890 04:07:38 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78140 ']'
00:22:55.890 04:07:38 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78140
00:22:55.890 04:07:38 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:22:55.890 04:07:38 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:55.890 04:07:38 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78140
00:22:55.890 04:07:38 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:55.890 04:07:38 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:55.890 killing process with pid 78140
00:22:55.890 04:07:38 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78140'
00:22:55.890 04:07:38 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78140
00:22:58.430 04:07:40 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78140
00:22:58.430 04:07:40 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536
00:22:59.369 65536+0 records in
00:22:59.369 65536+0 records out
00:22:59.369 268435456 bytes (268 MB, 256 MiB) copied, 0.96364 s, 279 MB/s
00:22:59.369 04:07:41 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:22:59.369 [2024-12-07 04:07:41.848120] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization...
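
The dd figures above are self-consistent: 65536 records of 4 KiB is 268435456 bytes (256 MiB), and dividing by the 0.96364 s elapsed gives the reported 279 MB/s (decimal megabytes):

  echo $((65536 * 4096))                                           # 268435456
  awk 'BEGIN { printf "%.0f MB/s\n", 268435456 / 0.96364 / 1e6 }'  # 279 MB/s
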
00:22:59.369 [2024-12-07 04:07:41.848233] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78339 ]
00:22:59.369 [2024-12-07 04:07:42.028750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:59.628 [2024-12-07 04:07:42.134135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:59.889 [2024-12-07 04:07:42.483785] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:22:59.889 [2024-12-07 04:07:42.483853] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:23:00.150 [2024-12-07 04:07:42.645260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:00.150 [2024-12-07 04:07:42.645308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:23:00.150 [2024-12-07 04:07:42.645323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:23:00.150 [2024-12-07 04:07:42.645333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:00.150 [2024-12-07 04:07:42.648406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:00.150 [2024-12-07 04:07:42.648444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:23:00.150 [2024-12-07 04:07:42.648472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.059 ms
00:23:00.150 [2024-12-07 04:07:42.648482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:00.150 [2024-12-07 04:07:42.648573] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:23:00.150 [2024-12-07 04:07:42.649611] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:23:00.150 [2024-12-07 04:07:42.649645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:00.150 [2024-12-07 04:07:42.649657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:23:00.150 [2024-12-07 04:07:42.649668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.081 ms
00:23:00.150 [2024-12-07 04:07:42.649679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:00.150 [2024-12-07 04:07:42.651167] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:23:00.150 [2024-12-07 04:07:42.669706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:00.150 [2024-12-07 04:07:42.669742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:23:00.150 [2024-12-07 04:07:42.669757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.570 ms
00:23:00.150 [2024-12-07 04:07:42.669767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:00.150 [2024-12-07 04:07:42.669862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:00.150 [2024-12-07 04:07:42.669876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:23:00.150 [2024-12-07 04:07:42.669887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms
00:23:00.150 [2024-12-07 04:07:42.669897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
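
Note the contrast with the first bring-up earlier in the log: there the device was created from scratch (layout setup mode 1, superblock blobs stored), while this restart after bdev_ftl_unload finds a valid superblock on the reopened devices, i.e. Load super block above and, just below, layout setup mode 0 with the layout blobs loaded before being re-stored. This appears consistent with the clean shutdown recorded earlier. One way to compare the two passes is to grep the console output (file name hypothetical):

  grep -nE 'layout setup mode|Load super block|layout blob (load|store)' console.log
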
00:23:00.150 [2024-12-07 04:07:42.676647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:00.150 [2024-12-07 04:07:42.676658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.677 ms 00:23:00.150 [2024-12-07 04:07:42.676669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.150 [2024-12-07 04:07:42.676761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.150 [2024-12-07 04:07:42.676774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:00.150 [2024-12-07 04:07:42.676786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:23:00.150 [2024-12-07 04:07:42.676798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.150 [2024-12-07 04:07:42.676825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.150 [2024-12-07 04:07:42.676836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:00.150 [2024-12-07 04:07:42.676847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:00.150 [2024-12-07 04:07:42.676856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.150 [2024-12-07 04:07:42.676878] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:00.150 [2024-12-07 04:07:42.681406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.150 [2024-12-07 04:07:42.681438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:00.150 [2024-12-07 04:07:42.681450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.540 ms 00:23:00.150 [2024-12-07 04:07:42.681459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.150 [2024-12-07 04:07:42.681525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.150 [2024-12-07 04:07:42.681537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:00.150 [2024-12-07 04:07:42.681547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:00.150 [2024-12-07 04:07:42.681561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.150 [2024-12-07 04:07:42.681579] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:00.150 [2024-12-07 04:07:42.681609] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:00.150 [2024-12-07 04:07:42.681648] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:00.150 [2024-12-07 04:07:42.681667] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:00.150 [2024-12-07 04:07:42.681785] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:00.150 [2024-12-07 04:07:42.681800] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:00.150 [2024-12-07 04:07:42.681818] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:00.150 [2024-12-07 04:07:42.681831] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:00.150 [2024-12-07 04:07:42.681844] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:00.150 [2024-12-07 04:07:42.681855] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:00.150 [2024-12-07 04:07:42.681866] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:00.150 [2024-12-07 04:07:42.681876] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:00.150 [2024-12-07 04:07:42.681886] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:00.150 [2024-12-07 04:07:42.681897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.150 [2024-12-07 04:07:42.681908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:00.150 [2024-12-07 04:07:42.681919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:23:00.150 [2024-12-07 04:07:42.681929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.150 [2024-12-07 04:07:42.682021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.150 [2024-12-07 04:07:42.682035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:00.150 [2024-12-07 04:07:42.682046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:00.150 [2024-12-07 04:07:42.682056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.150 [2024-12-07 04:07:42.682147] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:00.150 [2024-12-07 04:07:42.682161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:00.150 [2024-12-07 04:07:42.682173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:00.150 [2024-12-07 04:07:42.682183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.150 [2024-12-07 04:07:42.682195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:00.150 [2024-12-07 04:07:42.682212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:00.150 [2024-12-07 04:07:42.682222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:00.150 [2024-12-07 04:07:42.682232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:00.150 [2024-12-07 04:07:42.682242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:00.150 [2024-12-07 04:07:42.682252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:00.150 [2024-12-07 04:07:42.682262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:00.151 [2024-12-07 04:07:42.682282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:00.151 [2024-12-07 04:07:42.682292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:00.151 [2024-12-07 04:07:42.682301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:00.151 [2024-12-07 04:07:42.682311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:00.151 [2024-12-07 04:07:42.682320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.151 [2024-12-07 04:07:42.682330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:00.151 [2024-12-07 04:07:42.682339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:00.151 [2024-12-07 04:07:42.682349] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.151 [2024-12-07 04:07:42.682358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:00.151 [2024-12-07 04:07:42.682368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:00.151 [2024-12-07 04:07:42.682378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:00.151 [2024-12-07 04:07:42.682387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:00.151 [2024-12-07 04:07:42.682396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:00.151 [2024-12-07 04:07:42.682405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:00.151 [2024-12-07 04:07:42.682414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:00.151 [2024-12-07 04:07:42.682424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:00.151 [2024-12-07 04:07:42.682433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:00.151 [2024-12-07 04:07:42.682443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:00.151 [2024-12-07 04:07:42.682453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:00.151 [2024-12-07 04:07:42.682462] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:00.151 [2024-12-07 04:07:42.682471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:00.151 [2024-12-07 04:07:42.682481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:00.151 [2024-12-07 04:07:42.682489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:00.151 [2024-12-07 04:07:42.682499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:00.151 [2024-12-07 04:07:42.682508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:00.151 [2024-12-07 04:07:42.682517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:00.151 [2024-12-07 04:07:42.682526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:00.151 [2024-12-07 04:07:42.682535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:00.151 [2024-12-07 04:07:42.682544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.151 [2024-12-07 04:07:42.682554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:00.151 [2024-12-07 04:07:42.682563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:00.151 [2024-12-07 04:07:42.682573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.151 [2024-12-07 04:07:42.682582] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:00.151 [2024-12-07 04:07:42.682595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:00.151 [2024-12-07 04:07:42.682605] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:00.151 [2024-12-07 04:07:42.682614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:00.151 [2024-12-07 04:07:42.682625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:00.151 [2024-12-07 04:07:42.682636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:00.151 [2024-12-07 04:07:42.682646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:00.151 
[2024-12-07 04:07:42.682655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:00.151 [2024-12-07 04:07:42.682665] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:00.151 [2024-12-07 04:07:42.682674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:00.151 [2024-12-07 04:07:42.682685] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:00.151 [2024-12-07 04:07:42.682697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:00.151 [2024-12-07 04:07:42.682708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:00.151 [2024-12-07 04:07:42.682719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:00.151 [2024-12-07 04:07:42.682730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:00.151 [2024-12-07 04:07:42.682740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:00.151 [2024-12-07 04:07:42.682751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:00.151 [2024-12-07 04:07:42.682761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:00.151 [2024-12-07 04:07:42.682771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:00.151 [2024-12-07 04:07:42.682782] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:00.151 [2024-12-07 04:07:42.682792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:00.151 [2024-12-07 04:07:42.682804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:00.151 [2024-12-07 04:07:42.682815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:00.151 [2024-12-07 04:07:42.682826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:00.151 [2024-12-07 04:07:42.682836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:00.151 [2024-12-07 04:07:42.682846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:00.151 [2024-12-07 04:07:42.682856] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:00.151 [2024-12-07 04:07:42.682867] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:00.151 [2024-12-07 04:07:42.682882] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:00.151 [2024-12-07 04:07:42.682893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:00.151 [2024-12-07 04:07:42.682903] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:00.151 [2024-12-07 04:07:42.682914] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:00.151 [2024-12-07 04:07:42.682925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.151 [2024-12-07 04:07:42.682945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:00.151 [2024-12-07 04:07:42.682956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.834 ms 00:23:00.151 [2024-12-07 04:07:42.682971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.151 [2024-12-07 04:07:42.721981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.151 [2024-12-07 04:07:42.722017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:00.151 [2024-12-07 04:07:42.722047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.020 ms 00:23:00.151 [2024-12-07 04:07:42.722061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.152 [2024-12-07 04:07:42.722172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.152 [2024-12-07 04:07:42.722185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:00.152 [2024-12-07 04:07:42.722197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:23:00.152 [2024-12-07 04:07:42.722215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.152 [2024-12-07 04:07:42.791028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.152 [2024-12-07 04:07:42.791067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:00.152 [2024-12-07 04:07:42.791082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.901 ms 00:23:00.152 [2024-12-07 04:07:42.791093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.152 [2024-12-07 04:07:42.791179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.152 [2024-12-07 04:07:42.791193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:00.152 [2024-12-07 04:07:42.791205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:00.152 [2024-12-07 04:07:42.791216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.152 [2024-12-07 04:07:42.791663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.152 [2024-12-07 04:07:42.791685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:00.152 [2024-12-07 04:07:42.791700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:23:00.152 [2024-12-07 04:07:42.791710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.152 [2024-12-07 04:07:42.791827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.152 [2024-12-07 04:07:42.791840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:00.152 [2024-12-07 04:07:42.791851] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:23:00.152 [2024-12-07 04:07:42.791861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.152 [2024-12-07 04:07:42.810757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.152 [2024-12-07 04:07:42.810791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:00.152 [2024-12-07 04:07:42.810820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.904 ms 00:23:00.152 [2024-12-07 04:07:42.810831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.152 [2024-12-07 04:07:42.829599] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:23:00.152 [2024-12-07 04:07:42.829638] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:00.152 [2024-12-07 04:07:42.829669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.152 [2024-12-07 04:07:42.829680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:00.152 [2024-12-07 04:07:42.829692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.766 ms 00:23:00.152 [2024-12-07 04:07:42.829702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.152 [2024-12-07 04:07:42.857256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.152 [2024-12-07 04:07:42.857308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:00.152 [2024-12-07 04:07:42.857322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.520 ms 00:23:00.152 [2024-12-07 04:07:42.857333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.152 [2024-12-07 04:07:42.874782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.152 [2024-12-07 04:07:42.874817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:00.152 [2024-12-07 04:07:42.874845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.403 ms 00:23:00.152 [2024-12-07 04:07:42.874856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.411 [2024-12-07 04:07:42.892701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.411 [2024-12-07 04:07:42.892735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:00.411 [2024-12-07 04:07:42.892763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.777 ms 00:23:00.411 [2024-12-07 04:07:42.892773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.411 [2024-12-07 04:07:42.893584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.411 [2024-12-07 04:07:42.893616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:00.411 [2024-12-07 04:07:42.893629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.698 ms 00:23:00.411 [2024-12-07 04:07:42.893639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.411 [2024-12-07 04:07:42.975924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.411 [2024-12-07 04:07:42.975988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:00.411 [2024-12-07 04:07:42.976004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 82.388 ms 00:23:00.411 [2024-12-07 04:07:42.976015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.411 [2024-12-07 04:07:42.985976] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:00.411 [2024-12-07 04:07:43.001435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.411 [2024-12-07 04:07:43.001478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:00.411 [2024-12-07 04:07:43.001494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.372 ms 00:23:00.411 [2024-12-07 04:07:43.001510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.411 [2024-12-07 04:07:43.001617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.411 [2024-12-07 04:07:43.001630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:00.411 [2024-12-07 04:07:43.001642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:00.411 [2024-12-07 04:07:43.001652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.411 [2024-12-07 04:07:43.001704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.411 [2024-12-07 04:07:43.001716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:00.411 [2024-12-07 04:07:43.001726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:23:00.411 [2024-12-07 04:07:43.001739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.411 [2024-12-07 04:07:43.001771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.411 [2024-12-07 04:07:43.001784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:00.411 [2024-12-07 04:07:43.001794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:00.411 [2024-12-07 04:07:43.001803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.411 [2024-12-07 04:07:43.001855] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:00.411 [2024-12-07 04:07:43.001867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.411 [2024-12-07 04:07:43.001877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:00.411 [2024-12-07 04:07:43.001903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:00.411 [2024-12-07 04:07:43.001913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.411 [2024-12-07 04:07:43.036320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.411 [2024-12-07 04:07:43.036359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:00.411 [2024-12-07 04:07:43.036390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.441 ms 00:23:00.411 [2024-12-07 04:07:43.036400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:00.411 [2024-12-07 04:07:43.036509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:00.411 [2024-12-07 04:07:43.036524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:00.411 [2024-12-07 04:07:43.036535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:23:00.411 [2024-12-07 04:07:43.036546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:00.411 [2024-12-07 04:07:43.037587] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:00.411 [2024-12-07 04:07:43.041745] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 392.660 ms, result 0 00:23:00.411 [2024-12-07 04:07:43.042646] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:00.411 [2024-12-07 04:07:43.060205] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:01.350  [2024-12-07T04:07:45.462Z] Copying: 23/256 [MB] (23 MBps) [2024-12-07T04:07:46.412Z] Copying: 46/256 [MB] (23 MBps) [2024-12-07T04:07:47.348Z] Copying: 70/256 [MB] (23 MBps) [2024-12-07T04:07:48.285Z] Copying: 92/256 [MB] (22 MBps) [2024-12-07T04:07:49.240Z] Copying: 116/256 [MB] (23 MBps) [2024-12-07T04:07:50.207Z] Copying: 140/256 [MB] (23 MBps) [2024-12-07T04:07:51.141Z] Copying: 164/256 [MB] (24 MBps) [2024-12-07T04:07:52.076Z] Copying: 188/256 [MB] (24 MBps) [2024-12-07T04:07:53.457Z] Copying: 213/256 [MB] (24 MBps) [2024-12-07T04:07:54.026Z] Copying: 237/256 [MB] (24 MBps) [2024-12-07T04:07:54.026Z] Copying: 256/256 [MB] (average 23 MBps)[2024-12-07 04:07:53.800282] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:11.290 [2024-12-07 04:07:53.814535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.290 [2024-12-07 04:07:53.814577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:11.290 [2024-12-07 04:07:53.814593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:11.290 [2024-12-07 04:07:53.814611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.290 [2024-12-07 04:07:53.814633] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:11.290 [2024-12-07 04:07:53.818828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.290 [2024-12-07 04:07:53.818861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:11.290 [2024-12-07 04:07:53.818873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.185 ms 00:23:11.290 [2024-12-07 04:07:53.818884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.290 [2024-12-07 04:07:53.820739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.290 [2024-12-07 04:07:53.820777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:11.290 [2024-12-07 04:07:53.820789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.832 ms 00:23:11.290 [2024-12-07 04:07:53.820800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.290 [2024-12-07 04:07:53.827959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.290 [2024-12-07 04:07:53.828000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:11.290 [2024-12-07 04:07:53.828012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.151 ms 00:23:11.290 [2024-12-07 04:07:53.828021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.290 [2024-12-07 04:07:53.833344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.290 [2024-12-07 04:07:53.833377] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:11.290 [2024-12-07 04:07:53.833388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.297 ms 00:23:11.290 [2024-12-07 04:07:53.833398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.290 [2024-12-07 04:07:53.867450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.290 [2024-12-07 04:07:53.867487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:11.290 [2024-12-07 04:07:53.867501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.062 ms 00:23:11.290 [2024-12-07 04:07:53.867510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.290 [2024-12-07 04:07:53.887640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.290 [2024-12-07 04:07:53.887688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:11.290 [2024-12-07 04:07:53.887722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.110 ms 00:23:11.290 [2024-12-07 04:07:53.887732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.290 [2024-12-07 04:07:53.887877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.290 [2024-12-07 04:07:53.887892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:11.290 [2024-12-07 04:07:53.887903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:23:11.290 [2024-12-07 04:07:53.887923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.290 [2024-12-07 04:07:53.923481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.291 [2024-12-07 04:07:53.923517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:11.291 [2024-12-07 04:07:53.923545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.575 ms 00:23:11.291 [2024-12-07 04:07:53.923565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.291 [2024-12-07 04:07:53.958072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.291 [2024-12-07 04:07:53.958107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:11.291 [2024-12-07 04:07:53.958119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.512 ms 00:23:11.291 [2024-12-07 04:07:53.958129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.291 [2024-12-07 04:07:53.992310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.291 [2024-12-07 04:07:53.992344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:11.291 [2024-12-07 04:07:53.992373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.187 ms 00:23:11.291 [2024-12-07 04:07:53.992383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.552 [2024-12-07 04:07:54.026473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.552 [2024-12-07 04:07:54.026508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:11.552 [2024-12-07 04:07:54.026521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.050 ms 00:23:11.552 [2024-12-07 04:07:54.026530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.552 [2024-12-07 04:07:54.026582] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 
validity: 00:23:11.552 [2024-12-07 04:07:54.026599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 
wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.026997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.027009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.027019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.027030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.027040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.027051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.027062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.027072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.027083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.027093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.027105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.027116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.027126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.027137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.027147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.027158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.027168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.027179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:11.552 [2024-12-07 04:07:54.027189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027407] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027680] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:11.553 [2024-12-07 04:07:54.027697] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:11.553 [2024-12-07 04:07:54.027706] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 03b91864-17ea-4316-91d7-3cf42d3b8eda 00:23:11.553 [2024-12-07 04:07:54.027717] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:11.553 [2024-12-07 04:07:54.027726] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:11.553 [2024-12-07 04:07:54.027737] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:11.553 [2024-12-07 04:07:54.027747] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:11.553 [2024-12-07 04:07:54.027756] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:11.553 [2024-12-07 04:07:54.027766] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:11.553 [2024-12-07 04:07:54.027779] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:11.553 [2024-12-07 04:07:54.027788] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:11.553 [2024-12-07 04:07:54.027797] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:11.553 [2024-12-07 04:07:54.027806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.553 [2024-12-07 04:07:54.027816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:11.553 [2024-12-07 04:07:54.027827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.228 ms 00:23:11.553 [2024-12-07 04:07:54.027837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.553 [2024-12-07 04:07:54.046965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.553 [2024-12-07 04:07:54.046999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:11.553 [2024-12-07 04:07:54.047027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.140 ms 00:23:11.553 [2024-12-07 04:07:54.047037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.553 [2024-12-07 04:07:54.047623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.553 [2024-12-07 04:07:54.047647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:11.553 [2024-12-07 04:07:54.047658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:23:11.553 [2024-12-07 04:07:54.047668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.553 [2024-12-07 04:07:54.097592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.553 [2024-12-07 04:07:54.097628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:11.553 [2024-12-07 04:07:54.097641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.553 [2024-12-07 04:07:54.097657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.553 [2024-12-07 04:07:54.097738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.553 [2024-12-07 04:07:54.097749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:11.553 [2024-12-07 04:07:54.097760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.553 [2024-12-07 04:07:54.097769] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.553 [2024-12-07 04:07:54.097813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.553 [2024-12-07 04:07:54.097825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:11.553 [2024-12-07 04:07:54.097835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.553 [2024-12-07 04:07:54.097844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.553 [2024-12-07 04:07:54.097866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.553 [2024-12-07 04:07:54.097876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:11.553 [2024-12-07 04:07:54.097886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.553 [2024-12-07 04:07:54.097895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.553 [2024-12-07 04:07:54.213879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.553 [2024-12-07 04:07:54.213935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:11.553 [2024-12-07 04:07:54.213950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.553 [2024-12-07 04:07:54.213960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.813 [2024-12-07 04:07:54.309363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.813 [2024-12-07 04:07:54.309412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:11.813 [2024-12-07 04:07:54.309426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.813 [2024-12-07 04:07:54.309437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.813 [2024-12-07 04:07:54.309494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.813 [2024-12-07 04:07:54.309506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:11.813 [2024-12-07 04:07:54.309517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.813 [2024-12-07 04:07:54.309527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.813 [2024-12-07 04:07:54.309554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.813 [2024-12-07 04:07:54.309570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:11.813 [2024-12-07 04:07:54.309580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.813 [2024-12-07 04:07:54.309590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.813 [2024-12-07 04:07:54.309692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.813 [2024-12-07 04:07:54.309706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:11.813 [2024-12-07 04:07:54.309716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.813 [2024-12-07 04:07:54.309726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.813 [2024-12-07 04:07:54.309783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.813 [2024-12-07 04:07:54.309796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:11.813 [2024-12-07 04:07:54.309810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:23:11.813 [2024-12-07 04:07:54.309836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.813 [2024-12-07 04:07:54.309875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.813 [2024-12-07 04:07:54.309886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:11.813 [2024-12-07 04:07:54.309896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.813 [2024-12-07 04:07:54.309907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.813 [2024-12-07 04:07:54.309950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.813 [2024-12-07 04:07:54.309984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:11.813 [2024-12-07 04:07:54.309996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.813 [2024-12-07 04:07:54.310006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.813 [2024-12-07 04:07:54.310144] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 496.403 ms, result 0 00:23:13.191 00:23:13.191 00:23:13.191 04:07:55 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78480 00:23:13.191 04:07:55 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:23:13.191 04:07:55 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78480 00:23:13.191 04:07:55 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78480 ']' 00:23:13.191 04:07:55 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.192 04:07:55 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:13.192 04:07:55 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.192 04:07:55 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:13.192 04:07:55 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:13.192 [2024-12-07 04:07:55.633927] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
00:23:13.192 [2024-12-07 04:07:55.634503] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78480 ] 00:23:13.192 [2024-12-07 04:07:55.814673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.192 [2024-12-07 04:07:55.922273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.129 04:07:56 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:14.129 04:07:56 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:23:14.129 04:07:56 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:23:14.388 [2024-12-07 04:07:56.966335] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:14.388 [2024-12-07 04:07:56.966394] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:14.649 [2024-12-07 04:07:57.147087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.649 [2024-12-07 04:07:57.147136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:14.649 [2024-12-07 04:07:57.147155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:14.649 [2024-12-07 04:07:57.147166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.649 [2024-12-07 04:07:57.150357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.649 [2024-12-07 04:07:57.150394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:14.649 [2024-12-07 04:07:57.150408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.174 ms 00:23:14.649 [2024-12-07 04:07:57.150419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.649 [2024-12-07 04:07:57.150514] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:14.649 [2024-12-07 04:07:57.151532] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:14.650 [2024-12-07 04:07:57.151571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.650 [2024-12-07 04:07:57.151582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:14.650 [2024-12-07 04:07:57.151596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.069 ms 00:23:14.650 [2024-12-07 04:07:57.151607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.650 [2024-12-07 04:07:57.153086] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:14.650 [2024-12-07 04:07:57.170798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.650 [2024-12-07 04:07:57.170841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:14.650 [2024-12-07 04:07:57.170855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.745 ms 00:23:14.650 [2024-12-07 04:07:57.170869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.650 [2024-12-07 04:07:57.170986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.650 [2024-12-07 04:07:57.171004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:14.650 [2024-12-07 04:07:57.171016] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:23:14.650 [2024-12-07 04:07:57.171029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.650 [2024-12-07 04:07:57.177660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.650 [2024-12-07 04:07:57.177698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:14.650 [2024-12-07 04:07:57.177710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.596 ms 00:23:14.650 [2024-12-07 04:07:57.177723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.650 [2024-12-07 04:07:57.177824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.650 [2024-12-07 04:07:57.177841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:14.650 [2024-12-07 04:07:57.177852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:14.650 [2024-12-07 04:07:57.177870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.650 [2024-12-07 04:07:57.177897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.650 [2024-12-07 04:07:57.177911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:14.650 [2024-12-07 04:07:57.177921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:14.650 [2024-12-07 04:07:57.177944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.650 [2024-12-07 04:07:57.177968] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:14.650 [2024-12-07 04:07:57.182633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.650 [2024-12-07 04:07:57.182665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:14.650 [2024-12-07 04:07:57.182679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.675 ms 00:23:14.650 [2024-12-07 04:07:57.182689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.650 [2024-12-07 04:07:57.182760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.650 [2024-12-07 04:07:57.182773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:14.650 [2024-12-07 04:07:57.182787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:14.650 [2024-12-07 04:07:57.182801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.650 [2024-12-07 04:07:57.182825] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:14.650 [2024-12-07 04:07:57.182848] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:14.650 [2024-12-07 04:07:57.182895] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:14.650 [2024-12-07 04:07:57.182915] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:14.650 [2024-12-07 04:07:57.183016] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:14.650 [2024-12-07 04:07:57.183031] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:14.650 [2024-12-07 04:07:57.183049] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:14.650 [2024-12-07 04:07:57.183062] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:14.650 [2024-12-07 04:07:57.183077] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:14.650 [2024-12-07 04:07:57.183087] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:14.650 [2024-12-07 04:07:57.183100] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:14.650 [2024-12-07 04:07:57.183110] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:14.650 [2024-12-07 04:07:57.183125] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:14.650 [2024-12-07 04:07:57.183137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.650 [2024-12-07 04:07:57.183151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:14.650 [2024-12-07 04:07:57.183161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.318 ms 00:23:14.650 [2024-12-07 04:07:57.183173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.650 [2024-12-07 04:07:57.183249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.650 [2024-12-07 04:07:57.183264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:14.650 [2024-12-07 04:07:57.183275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:23:14.650 [2024-12-07 04:07:57.183287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.650 [2024-12-07 04:07:57.183379] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:14.650 [2024-12-07 04:07:57.183394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:14.650 [2024-12-07 04:07:57.183405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:14.650 [2024-12-07 04:07:57.183418] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.650 [2024-12-07 04:07:57.183429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:14.650 [2024-12-07 04:07:57.183441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:14.650 [2024-12-07 04:07:57.183451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:14.650 [2024-12-07 04:07:57.183465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:14.650 [2024-12-07 04:07:57.183474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:14.650 [2024-12-07 04:07:57.183486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:14.650 [2024-12-07 04:07:57.183495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:14.650 [2024-12-07 04:07:57.183507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:14.650 [2024-12-07 04:07:57.183516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:14.650 [2024-12-07 04:07:57.183527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:14.650 [2024-12-07 04:07:57.183537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:14.650 [2024-12-07 04:07:57.183549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.650 
[2024-12-07 04:07:57.183557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:14.650 [2024-12-07 04:07:57.183569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:14.650 [2024-12-07 04:07:57.183593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.650 [2024-12-07 04:07:57.183608] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:14.650 [2024-12-07 04:07:57.183617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:14.650 [2024-12-07 04:07:57.183630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:14.650 [2024-12-07 04:07:57.183640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:14.650 [2024-12-07 04:07:57.183657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:14.650 [2024-12-07 04:07:57.183666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:14.650 [2024-12-07 04:07:57.183679] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:14.650 [2024-12-07 04:07:57.183689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:14.650 [2024-12-07 04:07:57.183702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:14.650 [2024-12-07 04:07:57.183712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:14.650 [2024-12-07 04:07:57.183727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:14.650 [2024-12-07 04:07:57.183736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:14.650 [2024-12-07 04:07:57.183750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:14.650 [2024-12-07 04:07:57.183759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:14.650 [2024-12-07 04:07:57.183772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:14.650 [2024-12-07 04:07:57.183782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:14.650 [2024-12-07 04:07:57.183795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:14.650 [2024-12-07 04:07:57.183805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:14.650 [2024-12-07 04:07:57.183824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:14.650 [2024-12-07 04:07:57.183833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:14.650 [2024-12-07 04:07:57.183850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.650 [2024-12-07 04:07:57.183859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:14.650 [2024-12-07 04:07:57.183873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:14.650 [2024-12-07 04:07:57.183886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.650 [2024-12-07 04:07:57.183900] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:14.650 [2024-12-07 04:07:57.183913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:14.650 [2024-12-07 04:07:57.183925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:14.650 [2024-12-07 04:07:57.183945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:14.650 [2024-12-07 04:07:57.183972] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:23:14.650 [2024-12-07 04:07:57.183983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:14.651 [2024-12-07 04:07:57.183996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:14.651 [2024-12-07 04:07:57.184005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:14.651 [2024-12-07 04:07:57.184016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:14.651 [2024-12-07 04:07:57.184026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:14.651 [2024-12-07 04:07:57.184040] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:14.651 [2024-12-07 04:07:57.184053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:14.651 [2024-12-07 04:07:57.184070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:14.651 [2024-12-07 04:07:57.184081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:14.651 [2024-12-07 04:07:57.184093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:14.651 [2024-12-07 04:07:57.184103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:14.651 [2024-12-07 04:07:57.184116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:14.651 [2024-12-07 04:07:57.184128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:14.651 [2024-12-07 04:07:57.184140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:14.651 [2024-12-07 04:07:57.184150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:14.651 [2024-12-07 04:07:57.184164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:14.651 [2024-12-07 04:07:57.184174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:14.651 [2024-12-07 04:07:57.184187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:14.651 [2024-12-07 04:07:57.184196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:14.651 [2024-12-07 04:07:57.184209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:14.651 [2024-12-07 04:07:57.184219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:14.651 [2024-12-07 04:07:57.184231] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:14.651 [2024-12-07 
04:07:57.184242] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:14.651 [2024-12-07 04:07:57.184260] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:14.651 [2024-12-07 04:07:57.184271] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:14.651 [2024-12-07 04:07:57.184284] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:14.651 [2024-12-07 04:07:57.184295] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:14.651 [2024-12-07 04:07:57.184308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.651 [2024-12-07 04:07:57.184319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:14.651 [2024-12-07 04:07:57.184332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.988 ms 00:23:14.651 [2024-12-07 04:07:57.184346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.651 [2024-12-07 04:07:57.223659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.651 [2024-12-07 04:07:57.223694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:14.651 [2024-12-07 04:07:57.223711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.313 ms 00:23:14.651 [2024-12-07 04:07:57.223726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.651 [2024-12-07 04:07:57.223841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.651 [2024-12-07 04:07:57.223854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:14.651 [2024-12-07 04:07:57.223869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:23:14.651 [2024-12-07 04:07:57.223879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.651 [2024-12-07 04:07:57.268613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.651 [2024-12-07 04:07:57.268650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:14.651 [2024-12-07 04:07:57.268666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.780 ms 00:23:14.651 [2024-12-07 04:07:57.268676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.651 [2024-12-07 04:07:57.268756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.651 [2024-12-07 04:07:57.268768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:14.651 [2024-12-07 04:07:57.268782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:14.651 [2024-12-07 04:07:57.268793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.651 [2024-12-07 04:07:57.269230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.651 [2024-12-07 04:07:57.269252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:14.651 [2024-12-07 04:07:57.269266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:23:14.651 [2024-12-07 04:07:57.269276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:14.651 [2024-12-07 04:07:57.269389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.651 [2024-12-07 04:07:57.269402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:14.651 [2024-12-07 04:07:57.269414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:23:14.651 [2024-12-07 04:07:57.269441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.651 [2024-12-07 04:07:57.290605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.651 [2024-12-07 04:07:57.290638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:14.651 [2024-12-07 04:07:57.290656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.169 ms 00:23:14.651 [2024-12-07 04:07:57.290667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.651 [2024-12-07 04:07:57.318968] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:23:14.651 [2024-12-07 04:07:57.319005] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:14.651 [2024-12-07 04:07:57.319025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.651 [2024-12-07 04:07:57.319036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:14.651 [2024-12-07 04:07:57.319052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.289 ms 00:23:14.651 [2024-12-07 04:07:57.319073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.651 [2024-12-07 04:07:57.347312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.651 [2024-12-07 04:07:57.347350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:14.651 [2024-12-07 04:07:57.347366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.205 ms 00:23:14.651 [2024-12-07 04:07:57.347375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.651 [2024-12-07 04:07:57.365499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.651 [2024-12-07 04:07:57.365532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:14.651 [2024-12-07 04:07:57.365550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.072 ms 00:23:14.651 [2024-12-07 04:07:57.365559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.651 [2024-12-07 04:07:57.383346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.651 [2024-12-07 04:07:57.383381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:14.651 [2024-12-07 04:07:57.383397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.741 ms 00:23:14.651 [2024-12-07 04:07:57.383407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.911 [2024-12-07 04:07:57.384180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.911 [2024-12-07 04:07:57.384213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:14.911 [2024-12-07 04:07:57.384231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.666 ms 00:23:14.911 [2024-12-07 04:07:57.384242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.912 [2024-12-07 
04:07:57.468334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.912 [2024-12-07 04:07:57.468393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:14.912 [2024-12-07 04:07:57.468414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.196 ms 00:23:14.912 [2024-12-07 04:07:57.468425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.912 [2024-12-07 04:07:57.478969] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:14.912 [2024-12-07 04:07:57.494989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.912 [2024-12-07 04:07:57.495042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:14.912 [2024-12-07 04:07:57.495061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.453 ms 00:23:14.912 [2024-12-07 04:07:57.495075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.912 [2024-12-07 04:07:57.495195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.912 [2024-12-07 04:07:57.495213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:14.912 [2024-12-07 04:07:57.495225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:14.912 [2024-12-07 04:07:57.495238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.912 [2024-12-07 04:07:57.495295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.912 [2024-12-07 04:07:57.495309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:14.912 [2024-12-07 04:07:57.495320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:14.912 [2024-12-07 04:07:57.495338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.912 [2024-12-07 04:07:57.495361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.912 [2024-12-07 04:07:57.495378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:14.912 [2024-12-07 04:07:57.495388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:14.912 [2024-12-07 04:07:57.495401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.912 [2024-12-07 04:07:57.495436] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:14.912 [2024-12-07 04:07:57.495454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.912 [2024-12-07 04:07:57.495470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:14.912 [2024-12-07 04:07:57.495484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:14.912 [2024-12-07 04:07:57.495494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.912 [2024-12-07 04:07:57.530602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.912 [2024-12-07 04:07:57.530642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:14.912 [2024-12-07 04:07:57.530659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.133 ms 00:23:14.912 [2024-12-07 04:07:57.530669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.912 [2024-12-07 04:07:57.530774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:14.912 [2024-12-07 04:07:57.530788] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:14.912 [2024-12-07 04:07:57.530801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:23:14.912 [2024-12-07 04:07:57.530815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:14.912 [2024-12-07 04:07:57.531780] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:14.912 [2024-12-07 04:07:57.535813] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 385.034 ms, result 0 00:23:14.912 [2024-12-07 04:07:57.537066] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:14.912 Some configs were skipped because the RPC state that can call them passed over. 00:23:14.912 04:07:57 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:23:15.171 [2024-12-07 04:07:57.775787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.171 [2024-12-07 04:07:57.775844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:15.171 [2024-12-07 04:07:57.775861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.663 ms 00:23:15.171 [2024-12-07 04:07:57.775875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.171 [2024-12-07 04:07:57.775910] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.789 ms, result 0 00:23:15.171 true 00:23:15.171 04:07:57 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:23:15.431 [2024-12-07 04:07:57.983296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:15.431 [2024-12-07 04:07:57.983341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:15.431 [2024-12-07 04:07:57.983359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.187 ms 00:23:15.431 [2024-12-07 04:07:57.983370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:15.431 [2024-12-07 04:07:57.983410] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.305 ms, result 0 00:23:15.431 true 00:23:15.431 04:07:57 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78480 00:23:15.431 04:07:57 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78480 ']' 00:23:15.431 04:07:57 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78480 00:23:15.431 04:07:58 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:23:15.431 04:07:58 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:15.431 04:07:58 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78480 00:23:15.431 killing process with pid 78480 00:23:15.431 04:07:58 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:15.431 04:07:58 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:15.431 04:07:58 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78480' 00:23:15.431 04:07:58 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78480 00:23:15.431 04:07:58 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78480 00:23:16.370 [2024-12-07 04:07:59.102388] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.370 [2024-12-07 04:07:59.102442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:16.370 [2024-12-07 04:07:59.102458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:16.370 [2024-12-07 04:07:59.102472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.370 [2024-12-07 04:07:59.102497] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:16.632 [2024-12-07 04:07:59.106757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.632 [2024-12-07 04:07:59.106791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:16.632 [2024-12-07 04:07:59.106809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.244 ms 00:23:16.632 [2024-12-07 04:07:59.106819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.632 [2024-12-07 04:07:59.107105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.632 [2024-12-07 04:07:59.107121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:16.632 [2024-12-07 04:07:59.107134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:23:16.632 [2024-12-07 04:07:59.107145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.632 [2024-12-07 04:07:59.110544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.632 [2024-12-07 04:07:59.110583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:16.632 [2024-12-07 04:07:59.110600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.380 ms 00:23:16.632 [2024-12-07 04:07:59.110611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.632 [2024-12-07 04:07:59.116027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.632 [2024-12-07 04:07:59.116062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:16.632 [2024-12-07 04:07:59.116078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.384 ms 00:23:16.632 [2024-12-07 04:07:59.116089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.632 [2024-12-07 04:07:59.130004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.632 [2024-12-07 04:07:59.130047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:16.632 [2024-12-07 04:07:59.130064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.882 ms 00:23:16.632 [2024-12-07 04:07:59.130074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.632 [2024-12-07 04:07:59.140183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.632 [2024-12-07 04:07:59.140222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:16.632 [2024-12-07 04:07:59.140236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.067 ms 00:23:16.632 [2024-12-07 04:07:59.140246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.632 [2024-12-07 04:07:59.140387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.632 [2024-12-07 04:07:59.140401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:16.632 [2024-12-07 04:07:59.140414] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:23:16.632 [2024-12-07 04:07:59.140424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.632 [2024-12-07 04:07:59.155122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.632 [2024-12-07 04:07:59.155160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:16.632 [2024-12-07 04:07:59.155175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.699 ms 00:23:16.632 [2024-12-07 04:07:59.155186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.632 [2024-12-07 04:07:59.170145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.632 [2024-12-07 04:07:59.170181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:16.632 [2024-12-07 04:07:59.170204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.929 ms 00:23:16.632 [2024-12-07 04:07:59.170223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.632 [2024-12-07 04:07:59.185005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.632 [2024-12-07 04:07:59.185037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:16.632 [2024-12-07 04:07:59.185052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.733 ms 00:23:16.632 [2024-12-07 04:07:59.185061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.632 [2024-12-07 04:07:59.198916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.632 [2024-12-07 04:07:59.198954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:16.632 [2024-12-07 04:07:59.198968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.804 ms 00:23:16.632 [2024-12-07 04:07:59.198977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.632 [2024-12-07 04:07:59.199031] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:16.632 [2024-12-07 04:07:59.199048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:16.632 [2024-12-07 04:07:59.199063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:16.632 [2024-12-07 04:07:59.199074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:16.632 [2024-12-07 04:07:59.199087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:16.632 [2024-12-07 04:07:59.199099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:16.632 [2024-12-07 04:07:59.199114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:16.632 [2024-12-07 04:07:59.199125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:16.632 [2024-12-07 04:07:59.199138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:16.632 [2024-12-07 04:07:59.199149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:16.632 [2024-12-07 04:07:59.199162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:16.632 [2024-12-07 
04:07:59.199172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:16.632 [2024-12-07 04:07:59.199185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:16.632 [2024-12-07 04:07:59.199195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:16.632 [2024-12-07 04:07:59.199207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:16.632 [2024-12-07 04:07:59.199217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:16.632 [2024-12-07 04:07:59.199232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:16.632 [2024-12-07 04:07:59.199242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:16.632 [2024-12-07 04:07:59.199254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:16.632 [2024-12-07 04:07:59.199265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:16.632 [2024-12-07 04:07:59.199278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:16.632 [2024-12-07 04:07:59.199289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:16.632 [2024-12-07 04:07:59.199304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:16.632 [2024-12-07 04:07:59.199313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:23:16.633 [2024-12-07 04:07:59.199466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.199992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.200003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.200015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.200025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.200038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.200049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.200064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.200075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.200087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.200098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.200111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.200122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.200134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.200144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.200159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.200170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.200185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.200195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.200207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.200217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.200229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:16.633 [2024-12-07 04:07:59.200255] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:16.633 [2024-12-07 04:07:59.200273] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 03b91864-17ea-4316-91d7-3cf42d3b8eda 00:23:16.633 [2024-12-07 04:07:59.200286] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:16.633 [2024-12-07 04:07:59.200298] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:16.633 [2024-12-07 04:07:59.200307] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:16.633 [2024-12-07 04:07:59.200319] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:16.633 [2024-12-07 04:07:59.200330] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:16.633 [2024-12-07 04:07:59.200343] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:16.633 [2024-12-07 04:07:59.200353] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:16.633 [2024-12-07 04:07:59.200365] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:16.633 [2024-12-07 04:07:59.200373] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:16.633 [2024-12-07 04:07:59.200385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:16.633 [2024-12-07 04:07:59.200395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:16.634 [2024-12-07 04:07:59.200407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.357 ms 00:23:16.634 [2024-12-07 04:07:59.200416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.634 [2024-12-07 04:07:59.218489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.634 [2024-12-07 04:07:59.218524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:16.634 [2024-12-07 04:07:59.218542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.065 ms 00:23:16.634 [2024-12-07 04:07:59.218553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.634 [2024-12-07 04:07:59.219174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:16.634 [2024-12-07 04:07:59.219189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:16.634 [2024-12-07 04:07:59.219206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.570 ms 00:23:16.634 [2024-12-07 04:07:59.219216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.634 [2024-12-07 04:07:59.282917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.634 [2024-12-07 04:07:59.283118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:16.634 [2024-12-07 04:07:59.283145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.634 [2024-12-07 04:07:59.283157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.634 [2024-12-07 04:07:59.283245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.634 [2024-12-07 04:07:59.283259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:16.634 [2024-12-07 04:07:59.283277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.634 [2024-12-07 04:07:59.283287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.634 [2024-12-07 04:07:59.283345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.634 [2024-12-07 04:07:59.283359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:16.634 [2024-12-07 04:07:59.283375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.634 [2024-12-07 04:07:59.283385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.634 [2024-12-07 04:07:59.283408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.634 [2024-12-07 04:07:59.283419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:16.634 [2024-12-07 04:07:59.283432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.634 [2024-12-07 04:07:59.283445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.894 [2024-12-07 04:07:59.399662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.894 [2024-12-07 04:07:59.399717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:16.894 [2024-12-07 04:07:59.399733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.894 [2024-12-07 04:07:59.399743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.894 [2024-12-07 
04:07:59.493276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.894 [2024-12-07 04:07:59.493323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:16.894 [2024-12-07 04:07:59.493339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.894 [2024-12-07 04:07:59.493352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.894 [2024-12-07 04:07:59.493426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.894 [2024-12-07 04:07:59.493437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:16.894 [2024-12-07 04:07:59.493454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.894 [2024-12-07 04:07:59.493464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.894 [2024-12-07 04:07:59.493495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.894 [2024-12-07 04:07:59.493506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:16.894 [2024-12-07 04:07:59.493518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.894 [2024-12-07 04:07:59.493527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.894 [2024-12-07 04:07:59.493627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.894 [2024-12-07 04:07:59.493640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:16.894 [2024-12-07 04:07:59.493653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.894 [2024-12-07 04:07:59.493663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.894 [2024-12-07 04:07:59.493702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.894 [2024-12-07 04:07:59.493714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:16.894 [2024-12-07 04:07:59.493726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.894 [2024-12-07 04:07:59.493735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.894 [2024-12-07 04:07:59.493778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.894 [2024-12-07 04:07:59.493790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:16.894 [2024-12-07 04:07:59.493805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.894 [2024-12-07 04:07:59.493815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.894 [2024-12-07 04:07:59.493860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:16.894 [2024-12-07 04:07:59.493872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:16.894 [2024-12-07 04:07:59.493885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:16.894 [2024-12-07 04:07:59.493895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:16.894 [2024-12-07 04:07:59.494068] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 392.281 ms, result 0 00:23:17.832 04:08:00 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:23:17.832 04:08:00 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:17.832 [2024-12-07 04:08:00.561586] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:23:17.832 [2024-12-07 04:08:00.561700] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78544 ] 00:23:18.091 [2024-12-07 04:08:00.742839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.349 [2024-12-07 04:08:00.849477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:18.606 [2024-12-07 04:08:01.196921] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:18.606 [2024-12-07 04:08:01.197004] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:18.867 [2024-12-07 04:08:01.357606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.867 [2024-12-07 04:08:01.357835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:18.867 [2024-12-07 04:08:01.357876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:18.867 [2024-12-07 04:08:01.357888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.867 [2024-12-07 04:08:01.361148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.867 [2024-12-07 04:08:01.361197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:18.867 [2024-12-07 04:08:01.361211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.236 ms 00:23:18.867 [2024-12-07 04:08:01.361222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.867 [2024-12-07 04:08:01.361332] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:18.867 [2024-12-07 04:08:01.362439] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:18.867 [2024-12-07 04:08:01.362476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.867 [2024-12-07 04:08:01.362487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:18.867 [2024-12-07 04:08:01.362499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.153 ms 00:23:18.867 [2024-12-07 04:08:01.362510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.867 [2024-12-07 04:08:01.364025] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:18.867 [2024-12-07 04:08:01.382723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.867 [2024-12-07 04:08:01.382758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:18.867 [2024-12-07 04:08:01.382771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.729 ms 00:23:18.867 [2024-12-07 04:08:01.382781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.867 [2024-12-07 04:08:01.382874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.867 [2024-12-07 04:08:01.382887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:18.867 [2024-12-07 04:08:01.382898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.024 ms 00:23:18.867 [2024-12-07 04:08:01.382908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.867 [2024-12-07 04:08:01.389619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.867 [2024-12-07 04:08:01.389646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:18.867 [2024-12-07 04:08:01.389657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.669 ms 00:23:18.867 [2024-12-07 04:08:01.389667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.867 [2024-12-07 04:08:01.389758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.867 [2024-12-07 04:08:01.389773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:18.867 [2024-12-07 04:08:01.389783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:23:18.867 [2024-12-07 04:08:01.389793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.867 [2024-12-07 04:08:01.389822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.867 [2024-12-07 04:08:01.389833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:18.867 [2024-12-07 04:08:01.389843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:18.867 [2024-12-07 04:08:01.389852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.867 [2024-12-07 04:08:01.389873] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:18.867 [2024-12-07 04:08:01.394672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.867 [2024-12-07 04:08:01.394791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:18.867 [2024-12-07 04:08:01.394958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.811 ms 00:23:18.867 [2024-12-07 04:08:01.394997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.867 [2024-12-07 04:08:01.395095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.867 [2024-12-07 04:08:01.395191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:18.867 [2024-12-07 04:08:01.395228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:18.867 [2024-12-07 04:08:01.395258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.867 [2024-12-07 04:08:01.395355] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:18.867 [2024-12-07 04:08:01.395407] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:18.867 [2024-12-07 04:08:01.395530] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:18.867 [2024-12-07 04:08:01.395590] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:18.867 [2024-12-07 04:08:01.395758] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:18.867 [2024-12-07 04:08:01.395983] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:18.867 [2024-12-07 04:08:01.396037] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:18.867 [2024-12-07 04:08:01.396097] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:18.867 [2024-12-07 04:08:01.396146] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:18.867 [2024-12-07 04:08:01.396254] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:18.867 [2024-12-07 04:08:01.396288] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:18.867 [2024-12-07 04:08:01.396318] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:18.867 [2024-12-07 04:08:01.396348] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:18.867 [2024-12-07 04:08:01.396378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.867 [2024-12-07 04:08:01.396408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:18.867 [2024-12-07 04:08:01.396525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.028 ms 00:23:18.867 [2024-12-07 04:08:01.396555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.867 [2024-12-07 04:08:01.396641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.867 [2024-12-07 04:08:01.396657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:18.867 [2024-12-07 04:08:01.396668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:18.867 [2024-12-07 04:08:01.396678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.867 [2024-12-07 04:08:01.396769] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:18.867 [2024-12-07 04:08:01.396783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:18.867 [2024-12-07 04:08:01.396793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:18.867 [2024-12-07 04:08:01.396803] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.867 [2024-12-07 04:08:01.396813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:18.867 [2024-12-07 04:08:01.396823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:18.867 [2024-12-07 04:08:01.396832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:18.867 [2024-12-07 04:08:01.396841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:18.867 [2024-12-07 04:08:01.396850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:18.867 [2024-12-07 04:08:01.396859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:18.867 [2024-12-07 04:08:01.396868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:18.867 [2024-12-07 04:08:01.396888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:18.867 [2024-12-07 04:08:01.396897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:18.867 [2024-12-07 04:08:01.396906] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:18.867 [2024-12-07 04:08:01.396915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:18.867 [2024-12-07 04:08:01.396924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.867 [2024-12-07 04:08:01.396944] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:18.867 [2024-12-07 04:08:01.396954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:18.867 [2024-12-07 04:08:01.396963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.867 [2024-12-07 04:08:01.396973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:18.867 [2024-12-07 04:08:01.396982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:18.867 [2024-12-07 04:08:01.396991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:18.867 [2024-12-07 04:08:01.397000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:18.867 [2024-12-07 04:08:01.397010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:18.868 [2024-12-07 04:08:01.397019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:18.868 [2024-12-07 04:08:01.397029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:18.868 [2024-12-07 04:08:01.397039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:18.868 [2024-12-07 04:08:01.397048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:18.868 [2024-12-07 04:08:01.397057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:18.868 [2024-12-07 04:08:01.397067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:18.868 [2024-12-07 04:08:01.397075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:18.868 [2024-12-07 04:08:01.397085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:18.868 [2024-12-07 04:08:01.397094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:18.868 [2024-12-07 04:08:01.397105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:18.868 [2024-12-07 04:08:01.397114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:18.868 [2024-12-07 04:08:01.397123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:18.868 [2024-12-07 04:08:01.397132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:18.868 [2024-12-07 04:08:01.397141] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:18.868 [2024-12-07 04:08:01.397151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:18.868 [2024-12-07 04:08:01.397160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.868 [2024-12-07 04:08:01.397169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:18.868 [2024-12-07 04:08:01.397178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:18.868 [2024-12-07 04:08:01.397194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.868 [2024-12-07 04:08:01.397203] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:18.868 [2024-12-07 04:08:01.397213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:18.868 [2024-12-07 04:08:01.397226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:18.868 [2024-12-07 04:08:01.397236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:18.868 [2024-12-07 04:08:01.397247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:18.868 
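A quick cross-check of the layout numbers reported above (a sketch, assuming the 4 KiB FTL block size these tests run with): the 90.00 MiB l2p region divided by the reported 4-byte L2P address size gives exactly the reported entry count; the 0x1900000-block data region in the superblock layout works out to the reported 102400.00 MiB; and the 261120-block bands dumped at shutdown later in this log come to about 1020 MiB apiece, so 100 bands roughly account for the data region.

    $ echo $(( 90 * 1024 * 1024 / 4 ))        # 23592960 -- matches "L2P entries: 23592960"
    $ echo $(( 0x1900000 * 4096 / 1048576 ))  # 102400   -- matches the 102400.00 MiB data region
    $ echo $(( 261120 * 4096 / 1048576 ))     # 1020     -- MiB per 261120-block band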
[2024-12-07 04:08:01.397257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:18.868 [2024-12-07 04:08:01.397266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:18.868 [2024-12-07 04:08:01.397275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:18.868 [2024-12-07 04:08:01.397284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:18.868 [2024-12-07 04:08:01.397293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:18.868 [2024-12-07 04:08:01.397304] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:18.868 [2024-12-07 04:08:01.397317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:18.868 [2024-12-07 04:08:01.397329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:18.868 [2024-12-07 04:08:01.397340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:18.868 [2024-12-07 04:08:01.397351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:18.868 [2024-12-07 04:08:01.397362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:18.868 [2024-12-07 04:08:01.397373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:18.868 [2024-12-07 04:08:01.397383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:18.868 [2024-12-07 04:08:01.397394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:18.868 [2024-12-07 04:08:01.397404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:18.868 [2024-12-07 04:08:01.397415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:18.868 [2024-12-07 04:08:01.397425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:18.868 [2024-12-07 04:08:01.397435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:18.868 [2024-12-07 04:08:01.397445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:18.868 [2024-12-07 04:08:01.397454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:18.868 [2024-12-07 04:08:01.397465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:18.868 [2024-12-07 04:08:01.397476] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:18.868 [2024-12-07 04:08:01.397487] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:18.868 [2024-12-07 04:08:01.397498] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:18.868 [2024-12-07 04:08:01.397508] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:18.868 [2024-12-07 04:08:01.397518] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:18.868 [2024-12-07 04:08:01.397528] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:18.868 [2024-12-07 04:08:01.397539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.868 [2024-12-07 04:08:01.397554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:18.868 [2024-12-07 04:08:01.397565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.826 ms 00:23:18.868 [2024-12-07 04:08:01.397575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.868 [2024-12-07 04:08:01.437761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.868 [2024-12-07 04:08:01.437797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:18.868 [2024-12-07 04:08:01.437810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.193 ms 00:23:18.868 [2024-12-07 04:08:01.437822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.868 [2024-12-07 04:08:01.437951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.868 [2024-12-07 04:08:01.437980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:18.868 [2024-12-07 04:08:01.437991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:23:18.868 [2024-12-07 04:08:01.438002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.868 [2024-12-07 04:08:01.508871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.868 [2024-12-07 04:08:01.508907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:18.868 [2024-12-07 04:08:01.508924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.961 ms 00:23:18.868 [2024-12-07 04:08:01.508943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.868 [2024-12-07 04:08:01.509042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.868 [2024-12-07 04:08:01.509054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:18.868 [2024-12-07 04:08:01.509065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:18.868 [2024-12-07 04:08:01.509075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.868 [2024-12-07 04:08:01.509523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.868 [2024-12-07 04:08:01.509542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:18.868 [2024-12-07 04:08:01.509558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:23:18.868 [2024-12-07 04:08:01.509569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.868 [2024-12-07 
04:08:01.509685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.868 [2024-12-07 04:08:01.509698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:18.868 [2024-12-07 04:08:01.509709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:23:18.868 [2024-12-07 04:08:01.509719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.868 [2024-12-07 04:08:01.528611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.868 [2024-12-07 04:08:01.528735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:18.868 [2024-12-07 04:08:01.528823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.901 ms 00:23:18.868 [2024-12-07 04:08:01.528860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.868 [2024-12-07 04:08:01.546737] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:23:18.868 [2024-12-07 04:08:01.546895] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:18.868 [2024-12-07 04:08:01.547014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.868 [2024-12-07 04:08:01.547029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:18.868 [2024-12-07 04:08:01.547040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.047 ms 00:23:18.868 [2024-12-07 04:08:01.547051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.868 [2024-12-07 04:08:01.575351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.868 [2024-12-07 04:08:01.575399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:18.868 [2024-12-07 04:08:01.575413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.247 ms 00:23:18.868 [2024-12-07 04:08:01.575423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.868 [2024-12-07 04:08:01.592775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.868 [2024-12-07 04:08:01.592809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:18.868 [2024-12-07 04:08:01.592821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.306 ms 00:23:18.868 [2024-12-07 04:08:01.592831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.128 [2024-12-07 04:08:01.610692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.128 [2024-12-07 04:08:01.610726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:19.128 [2024-12-07 04:08:01.610738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.820 ms 00:23:19.128 [2024-12-07 04:08:01.610747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.128 [2024-12-07 04:08:01.611553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.128 [2024-12-07 04:08:01.611582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:19.128 [2024-12-07 04:08:01.611595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.697 ms 00:23:19.128 [2024-12-07 04:08:01.611604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.128 [2024-12-07 04:08:01.692938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:19.128 [2024-12-07 04:08:01.693181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:19.128 [2024-12-07 04:08:01.693221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.437 ms 00:23:19.128 [2024-12-07 04:08:01.693232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.128 [2024-12-07 04:08:01.703462] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:19.128 [2024-12-07 04:08:01.719174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.128 [2024-12-07 04:08:01.719217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:19.128 [2024-12-07 04:08:01.719232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.825 ms 00:23:19.128 [2024-12-07 04:08:01.719247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.128 [2024-12-07 04:08:01.719353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.128 [2024-12-07 04:08:01.719366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:19.128 [2024-12-07 04:08:01.719377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:19.128 [2024-12-07 04:08:01.719387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.128 [2024-12-07 04:08:01.719443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.128 [2024-12-07 04:08:01.719455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:19.128 [2024-12-07 04:08:01.719464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:19.128 [2024-12-07 04:08:01.719479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.128 [2024-12-07 04:08:01.719510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.128 [2024-12-07 04:08:01.719523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:19.128 [2024-12-07 04:08:01.719534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:19.128 [2024-12-07 04:08:01.719543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.128 [2024-12-07 04:08:01.719578] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:19.128 [2024-12-07 04:08:01.719590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.128 [2024-12-07 04:08:01.719600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:19.128 [2024-12-07 04:08:01.719609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:19.128 [2024-12-07 04:08:01.719618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.128 [2024-12-07 04:08:01.753722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.128 [2024-12-07 04:08:01.753759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:19.128 [2024-12-07 04:08:01.753772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.138 ms 00:23:19.128 [2024-12-07 04:08:01.753798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.128 [2024-12-07 04:08:01.753902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:19.128 [2024-12-07 04:08:01.753916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:23:19.128 [2024-12-07 04:08:01.753953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:23:19.128 [2024-12-07 04:08:01.753964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:19.128 [2024-12-07 04:08:01.754842] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:19.128 [2024-12-07 04:08:01.758975] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 397.594 ms, result 0 00:23:19.129 [2024-12-07 04:08:01.759844] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:19.129 [2024-12-07 04:08:01.777691] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:20.067  [2024-12-07T04:08:04.184Z] Copying: 27/256 [MB] (27 MBps) [2024-12-07T04:08:05.122Z] Copying: 51/256 [MB] (24 MBps) [2024-12-07T04:08:06.057Z] Copying: 76/256 [MB] (24 MBps) [2024-12-07T04:08:06.993Z] Copying: 101/256 [MB] (24 MBps) [2024-12-07T04:08:07.929Z] Copying: 126/256 [MB] (25 MBps) [2024-12-07T04:08:08.867Z] Copying: 149/256 [MB] (23 MBps) [2024-12-07T04:08:09.803Z] Copying: 173/256 [MB] (24 MBps) [2024-12-07T04:08:11.180Z] Copying: 198/256 [MB] (24 MBps) [2024-12-07T04:08:12.114Z] Copying: 223/256 [MB] (25 MBps) [2024-12-07T04:08:12.114Z] Copying: 248/256 [MB] (25 MBps) [2024-12-07T04:08:12.114Z] Copying: 256/256 [MB] (average 24 MBps)[2024-12-07 04:08:12.054996] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:29.378 [2024-12-07 04:08:12.069159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.378 [2024-12-07 04:08:12.069196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:29.378 [2024-12-07 04:08:12.069236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:29.378 [2024-12-07 04:08:12.069246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.378 [2024-12-07 04:08:12.069268] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:29.378 [2024-12-07 04:08:12.073311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.378 [2024-12-07 04:08:12.073350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:29.378 [2024-12-07 04:08:12.073361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.033 ms 00:23:29.378 [2024-12-07 04:08:12.073387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.378 [2024-12-07 04:08:12.073607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.378 [2024-12-07 04:08:12.073619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:29.378 [2024-12-07 04:08:12.073629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:23:29.378 [2024-12-07 04:08:12.073638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.378 [2024-12-07 04:08:12.076530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.378 [2024-12-07 04:08:12.076649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:29.378 [2024-12-07 04:08:12.076668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.873 ms 00:23:29.378 [2024-12-07 04:08:12.076693] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.378 [2024-12-07 04:08:12.082197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.378 [2024-12-07 04:08:12.082233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:29.378 [2024-12-07 04:08:12.082245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.488 ms 00:23:29.378 [2024-12-07 04:08:12.082255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.643 [2024-12-07 04:08:12.116818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.643 [2024-12-07 04:08:12.116852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:29.643 [2024-12-07 04:08:12.116866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.552 ms 00:23:29.643 [2024-12-07 04:08:12.116876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.643 [2024-12-07 04:08:12.137146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.643 [2024-12-07 04:08:12.137185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:29.643 [2024-12-07 04:08:12.137222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.224 ms 00:23:29.644 [2024-12-07 04:08:12.137232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.644 [2024-12-07 04:08:12.137365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.644 [2024-12-07 04:08:12.137379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:29.644 [2024-12-07 04:08:12.137403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:23:29.644 [2024-12-07 04:08:12.137413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.644 [2024-12-07 04:08:12.171731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.644 [2024-12-07 04:08:12.171861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:29.644 [2024-12-07 04:08:12.171880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.357 ms 00:23:29.644 [2024-12-07 04:08:12.171906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.644 [2024-12-07 04:08:12.205824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.644 [2024-12-07 04:08:12.205857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:29.644 [2024-12-07 04:08:12.205869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.887 ms 00:23:29.644 [2024-12-07 04:08:12.205878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.644 [2024-12-07 04:08:12.240613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.644 [2024-12-07 04:08:12.240644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:29.644 [2024-12-07 04:08:12.240655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.709 ms 00:23:29.644 [2024-12-07 04:08:12.240664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.644 [2024-12-07 04:08:12.274749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.644 [2024-12-07 04:08:12.274898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:29.644 [2024-12-07 04:08:12.274918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 34.063 ms 00:23:29.644 [2024-12-07 04:08:12.274940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.644 [2024-12-07 04:08:12.274991] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:29.644 [2024-12-07 04:08:12.275009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 
[2024-12-07 04:08:12.275248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:23:29.644 [2024-12-07 04:08:12.275509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:29.644 [2024-12-07 04:08:12.275735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:29.645 [2024-12-07 04:08:12.275746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:29.645 [2024-12-07 04:08:12.275756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:23:29.645 [2024-12-07 04:08:12.275766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:29.645 [2024-12-07 04:08:12.275776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:29.645 [2024-12-07 04:08:12.275786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:29.645 [2024-12-07 04:08:12.275796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:29.645 [2024-12-07 04:08:12.275806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:29.645 [2024-12-07 04:08:12.275817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:29.645 [2024-12-07 04:08:12.275827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:29.645 [2024-12-07 04:08:12.275837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:29.645 [2024-12-07 04:08:12.275847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:29.645 [2024-12-07 04:08:12.275859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:29.645 [2024-12-07 04:08:12.275869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:29.645 [2024-12-07 04:08:12.275879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:29.645 [2024-12-07 04:08:12.275890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:29.645 [2024-12-07 04:08:12.275901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:29.645 [2024-12-07 04:08:12.275911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:29.645 [2024-12-07 04:08:12.275922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:29.645 [2024-12-07 04:08:12.275941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:29.645 [2024-12-07 04:08:12.275951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:29.645 [2024-12-07 04:08:12.275961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:29.645 [2024-12-07 04:08:12.275972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:29.645 [2024-12-07 04:08:12.275998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:29.645 [2024-12-07 04:08:12.276009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:29.645 [2024-12-07 04:08:12.276020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:29.645 [2024-12-07 04:08:12.276030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:29.645 [2024-12-07 04:08:12.276041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:29.645 [2024-12-07 04:08:12.276051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:29.645 [2024-12-07 04:08:12.276062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:29.645 [2024-12-07 04:08:12.276080] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:29.645 [2024-12-07 04:08:12.276090] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 03b91864-17ea-4316-91d7-3cf42d3b8eda 00:23:29.645 [2024-12-07 04:08:12.276101] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:29.645 [2024-12-07 04:08:12.276111] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:29.645 [2024-12-07 04:08:12.276130] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:29.645 [2024-12-07 04:08:12.276140] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:29.645 [2024-12-07 04:08:12.276149] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:29.645 [2024-12-07 04:08:12.276159] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:29.645 [2024-12-07 04:08:12.276176] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:29.645 [2024-12-07 04:08:12.276185] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:29.645 [2024-12-07 04:08:12.276194] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:29.645 [2024-12-07 04:08:12.276203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.645 [2024-12-07 04:08:12.276213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:29.645 [2024-12-07 04:08:12.276223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.215 ms 00:23:29.645 [2024-12-07 04:08:12.276233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.645 [2024-12-07 04:08:12.295596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.645 [2024-12-07 04:08:12.295626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:29.645 [2024-12-07 04:08:12.295638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.374 ms 00:23:29.645 [2024-12-07 04:08:12.295664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.645 [2024-12-07 04:08:12.296247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:29.645 [2024-12-07 04:08:12.296264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:29.645 [2024-12-07 04:08:12.296276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:23:29.645 [2024-12-07 04:08:12.296285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.645 [2024-12-07 04:08:12.347987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.645 [2024-12-07 04:08:12.348140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:29.645 [2024-12-07 04:08:12.348181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.645 [2024-12-07 04:08:12.348200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.645 [2024-12-07 04:08:12.348291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.645 [2024-12-07 
04:08:12.348303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:29.645 [2024-12-07 04:08:12.348314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.645 [2024-12-07 04:08:12.348325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.645 [2024-12-07 04:08:12.348378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.645 [2024-12-07 04:08:12.348391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:29.645 [2024-12-07 04:08:12.348402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.645 [2024-12-07 04:08:12.348412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.645 [2024-12-07 04:08:12.348438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.645 [2024-12-07 04:08:12.348448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:29.645 [2024-12-07 04:08:12.348458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.645 [2024-12-07 04:08:12.348469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.904 [2024-12-07 04:08:12.468919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.904 [2024-12-07 04:08:12.468971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:29.904 [2024-12-07 04:08:12.468985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.904 [2024-12-07 04:08:12.468995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.904 [2024-12-07 04:08:12.564845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.905 [2024-12-07 04:08:12.565043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:29.905 [2024-12-07 04:08:12.565067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.905 [2024-12-07 04:08:12.565078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.905 [2024-12-07 04:08:12.565150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.905 [2024-12-07 04:08:12.565163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:29.905 [2024-12-07 04:08:12.565174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.905 [2024-12-07 04:08:12.565185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.905 [2024-12-07 04:08:12.565214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.905 [2024-12-07 04:08:12.565235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:29.905 [2024-12-07 04:08:12.565245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.905 [2024-12-07 04:08:12.565256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.905 [2024-12-07 04:08:12.565366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.905 [2024-12-07 04:08:12.565379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:29.905 [2024-12-07 04:08:12.565390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.905 [2024-12-07 04:08:12.565400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.905 [2024-12-07 04:08:12.565437] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.905 [2024-12-07 04:08:12.565450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:29.905 [2024-12-07 04:08:12.565468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.905 [2024-12-07 04:08:12.565478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.905 [2024-12-07 04:08:12.565517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.905 [2024-12-07 04:08:12.565528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:29.905 [2024-12-07 04:08:12.565538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.905 [2024-12-07 04:08:12.565548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.905 [2024-12-07 04:08:12.565591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:29.905 [2024-12-07 04:08:12.565610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:29.905 [2024-12-07 04:08:12.565620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:29.905 [2024-12-07 04:08:12.565631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:29.905 [2024-12-07 04:08:12.565773] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 497.407 ms, result 0 00:23:30.843 00:23:30.843 00:23:31.103 04:08:13 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:23:31.103 04:08:13 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:23:31.362 04:08:14 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:31.624 [2024-12-07 04:08:14.123600] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
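For context on the three test steps just issued: trim.sh@86 compares the first 4194304 bytes (4 MiB) of the dumped data file against /dev/zero, i.e. it checks that a 4 MiB span (presumably the range trimmed earlier in this test) reads back as zeroes; @87 fingerprints the whole dump produced by the earlier @85 read of 65536 blocks (256 MiB at 4 KiB per block, matching the "256/256 [MB]" copy progress above); and @90 writes 1024 blocks of random pattern back through ftl0, which at 4 KiB per block is exactly the same 4 MiB span. A minimal standalone sketch of that sequence, using the same paths as the trace lines above (the block-size arithmetic is an assumption based on the 4 KiB FTL block size):

    $ cd /home/vagrant/spdk_repo/spdk
    $ cmp --bytes=4194304 test/ftl/data /dev/zero     # first 4 MiB must read back as zeroes
    $ md5sum test/ftl/data                            # fingerprint the full 256 MiB dump
    $ build/bin/spdk_dd --if=test/ftl/random_pattern --ob=ftl0 \
          --count=1024 --json=test/ftl/config/ftl.json   # rewrite the same 4 MiB with random data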
00:23:31.624 [2024-12-07 04:08:14.123716] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78687 ] 00:23:31.624 [2024-12-07 04:08:14.302769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.885 [2024-12-07 04:08:14.410178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.145 [2024-12-07 04:08:14.765039] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:32.145 [2024-12-07 04:08:14.765110] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:32.406 [2024-12-07 04:08:14.926000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.406 [2024-12-07 04:08:14.926171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:32.406 [2024-12-07 04:08:14.926363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:32.406 [2024-12-07 04:08:14.926403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.406 [2024-12-07 04:08:14.929650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.406 [2024-12-07 04:08:14.929785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:32.406 [2024-12-07 04:08:14.929957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.203 ms 00:23:32.406 [2024-12-07 04:08:14.929995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.406 [2024-12-07 04:08:14.930167] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:32.406 [2024-12-07 04:08:14.931298] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:32.406 [2024-12-07 04:08:14.931333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.406 [2024-12-07 04:08:14.931345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:32.406 [2024-12-07 04:08:14.931356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.176 ms 00:23:32.406 [2024-12-07 04:08:14.931366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.406 [2024-12-07 04:08:14.932834] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:32.406 [2024-12-07 04:08:14.952657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.406 [2024-12-07 04:08:14.952693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:32.406 [2024-12-07 04:08:14.952707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.856 ms 00:23:32.406 [2024-12-07 04:08:14.952733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.406 [2024-12-07 04:08:14.952833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.406 [2024-12-07 04:08:14.952847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:32.406 [2024-12-07 04:08:14.952858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:32.406 [2024-12-07 04:08:14.952868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.406 [2024-12-07 04:08:14.959620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
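Each FTL management step in this output is traced by mngt/ftl_mngt.c as a fixed four-record group (Action or Rollback, then name, duration, status), so per-step timings can be extracted mechanically from a saved copy of this console output. A sketch, assuming the log was saved to a file named build.log (hypothetical name); the grep keeps only the name/duration records, which then alternate and can be paired up:

    $ grep -E 'trace_step.*(name|duration):' build.log | paste - -   # one line per step: name + duration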
00:23:32.406 [2024-12-07 04:08:14.959647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:32.406 [2024-12-07 04:08:14.959658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.720 ms 00:23:32.406 [2024-12-07 04:08:14.959684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.406 [2024-12-07 04:08:14.959782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.406 [2024-12-07 04:08:14.959797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:32.406 [2024-12-07 04:08:14.959808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:23:32.406 [2024-12-07 04:08:14.959828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.406 [2024-12-07 04:08:14.959860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.406 [2024-12-07 04:08:14.959871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:32.406 [2024-12-07 04:08:14.959881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:32.406 [2024-12-07 04:08:14.959891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.406 [2024-12-07 04:08:14.959913] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:32.406 [2024-12-07 04:08:14.964705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.406 [2024-12-07 04:08:14.964734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:32.406 [2024-12-07 04:08:14.964745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.805 ms 00:23:32.406 [2024-12-07 04:08:14.964754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.406 [2024-12-07 04:08:14.964838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.406 [2024-12-07 04:08:14.964850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:32.406 [2024-12-07 04:08:14.964860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:32.406 [2024-12-07 04:08:14.964870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.406 [2024-12-07 04:08:14.964894] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:32.406 [2024-12-07 04:08:14.964916] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:32.406 [2024-12-07 04:08:14.964963] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:32.406 [2024-12-07 04:08:14.964982] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:32.406 [2024-12-07 04:08:14.965077] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:32.406 [2024-12-07 04:08:14.965089] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:32.406 [2024-12-07 04:08:14.965102] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:32.406 [2024-12-07 04:08:14.965135] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:32.406 [2024-12-07 04:08:14.965147] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:32.406 [2024-12-07 04:08:14.965165] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:32.406 [2024-12-07 04:08:14.965175] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:32.406 [2024-12-07 04:08:14.965185] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:32.406 [2024-12-07 04:08:14.965195] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:32.406 [2024-12-07 04:08:14.965205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.406 [2024-12-07 04:08:14.965215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:32.406 [2024-12-07 04:08:14.965226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:23:32.406 [2024-12-07 04:08:14.965236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.406 [2024-12-07 04:08:14.965312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.406 [2024-12-07 04:08:14.965327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:32.406 [2024-12-07 04:08:14.965337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:32.406 [2024-12-07 04:08:14.965347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.406 [2024-12-07 04:08:14.965437] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:32.406 [2024-12-07 04:08:14.965450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:32.407 [2024-12-07 04:08:14.965461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:32.407 [2024-12-07 04:08:14.965471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:32.407 [2024-12-07 04:08:14.965482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:32.407 [2024-12-07 04:08:14.965491] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:32.407 [2024-12-07 04:08:14.965500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:32.407 [2024-12-07 04:08:14.965509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:32.407 [2024-12-07 04:08:14.965518] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:32.407 [2024-12-07 04:08:14.965527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:32.407 [2024-12-07 04:08:14.965538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:32.407 [2024-12-07 04:08:14.965556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:32.407 [2024-12-07 04:08:14.965566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:32.407 [2024-12-07 04:08:14.965576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:32.407 [2024-12-07 04:08:14.965586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:32.407 [2024-12-07 04:08:14.965595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:32.407 [2024-12-07 04:08:14.965605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:32.407 [2024-12-07 04:08:14.965614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:32.407 [2024-12-07 04:08:14.965623] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:32.407 [2024-12-07 04:08:14.965632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:32.407 [2024-12-07 04:08:14.965642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:32.407 [2024-12-07 04:08:14.965651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:32.407 [2024-12-07 04:08:14.965660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:32.407 [2024-12-07 04:08:14.965670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:32.407 [2024-12-07 04:08:14.965679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:32.407 [2024-12-07 04:08:14.965688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:32.407 [2024-12-07 04:08:14.965697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:32.407 [2024-12-07 04:08:14.965706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:32.407 [2024-12-07 04:08:14.965715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:32.407 [2024-12-07 04:08:14.965724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:32.407 [2024-12-07 04:08:14.965733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:32.407 [2024-12-07 04:08:14.965742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:32.407 [2024-12-07 04:08:14.965751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:32.407 [2024-12-07 04:08:14.965759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:32.407 [2024-12-07 04:08:14.965768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:32.407 [2024-12-07 04:08:14.965777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:32.407 [2024-12-07 04:08:14.965786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:32.407 [2024-12-07 04:08:14.965795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:32.407 [2024-12-07 04:08:14.965804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:32.407 [2024-12-07 04:08:14.965813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:32.407 [2024-12-07 04:08:14.965822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:32.407 [2024-12-07 04:08:14.965831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:32.407 [2024-12-07 04:08:14.965841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:32.407 [2024-12-07 04:08:14.965851] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:32.407 [2024-12-07 04:08:14.965861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:32.407 [2024-12-07 04:08:14.965874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:32.407 [2024-12-07 04:08:14.965884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:32.407 [2024-12-07 04:08:14.965894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:32.407 [2024-12-07 04:08:14.965904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:32.407 [2024-12-07 04:08:14.965914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:32.407 
[2024-12-07 04:08:14.965923] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:32.407 [2024-12-07 04:08:14.965932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:32.407 [2024-12-07 04:08:14.965952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:32.407 [2024-12-07 04:08:14.965964] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:32.407 [2024-12-07 04:08:14.965976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:32.407 [2024-12-07 04:08:14.965987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:32.407 [2024-12-07 04:08:14.965998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:32.407 [2024-12-07 04:08:14.966008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:32.407 [2024-12-07 04:08:14.966019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:32.407 [2024-12-07 04:08:14.966029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:32.407 [2024-12-07 04:08:14.966039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:32.407 [2024-12-07 04:08:14.966050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:32.407 [2024-12-07 04:08:14.966061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:32.407 [2024-12-07 04:08:14.966071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:32.407 [2024-12-07 04:08:14.966081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:32.407 [2024-12-07 04:08:14.966091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:32.407 [2024-12-07 04:08:14.966101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:32.407 [2024-12-07 04:08:14.966111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:32.407 [2024-12-07 04:08:14.966122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:32.407 [2024-12-07 04:08:14.966132] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:32.407 [2024-12-07 04:08:14.966143] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:32.407 [2024-12-07 04:08:14.966154] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:32.407 [2024-12-07 04:08:14.966164] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:32.407 [2024-12-07 04:08:14.966174] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:32.407 [2024-12-07 04:08:14.966187] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:32.407 [2024-12-07 04:08:14.966198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.407 [2024-12-07 04:08:14.966215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:32.407 [2024-12-07 04:08:14.966233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms 00:23:32.407 [2024-12-07 04:08:14.966242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.407 [2024-12-07 04:08:15.005040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.407 [2024-12-07 04:08:15.005187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:32.407 [2024-12-07 04:08:15.005225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.803 ms 00:23:32.407 [2024-12-07 04:08:15.005238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.407 [2024-12-07 04:08:15.005370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.407 [2024-12-07 04:08:15.005383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:32.407 [2024-12-07 04:08:15.005394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:23:32.407 [2024-12-07 04:08:15.005405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.407 [2024-12-07 04:08:15.061756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.407 [2024-12-07 04:08:15.061896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:32.407 [2024-12-07 04:08:15.061940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.420 ms 00:23:32.407 [2024-12-07 04:08:15.061964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.407 [2024-12-07 04:08:15.062061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.407 [2024-12-07 04:08:15.062073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:32.407 [2024-12-07 04:08:15.062085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:32.407 [2024-12-07 04:08:15.062095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.407 [2024-12-07 04:08:15.062543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.407 [2024-12-07 04:08:15.062556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:32.407 [2024-12-07 04:08:15.062573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:23:32.407 [2024-12-07 04:08:15.062583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.407 [2024-12-07 04:08:15.062700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.407 [2024-12-07 04:08:15.062714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:32.408 [2024-12-07 04:08:15.062724] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:23:32.408 [2024-12-07 04:08:15.062734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.408 [2024-12-07 04:08:15.081757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.408 [2024-12-07 04:08:15.081789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:32.408 [2024-12-07 04:08:15.081803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.030 ms 00:23:32.408 [2024-12-07 04:08:15.081813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.408 [2024-12-07 04:08:15.099996] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:23:32.408 [2024-12-07 04:08:15.100126] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:32.408 [2024-12-07 04:08:15.100145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.408 [2024-12-07 04:08:15.100172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:32.408 [2024-12-07 04:08:15.100184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.258 ms 00:23:32.408 [2024-12-07 04:08:15.100194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.408 [2024-12-07 04:08:15.128837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.408 [2024-12-07 04:08:15.128873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:32.408 [2024-12-07 04:08:15.128886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.567 ms 00:23:32.408 [2024-12-07 04:08:15.128897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.667 [2024-12-07 04:08:15.147147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.667 [2024-12-07 04:08:15.147182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:32.667 [2024-12-07 04:08:15.147195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.176 ms 00:23:32.667 [2024-12-07 04:08:15.147205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.667 [2024-12-07 04:08:15.165187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.667 [2024-12-07 04:08:15.165219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:32.667 [2024-12-07 04:08:15.165231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.936 ms 00:23:32.667 [2024-12-07 04:08:15.165239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.667 [2024-12-07 04:08:15.165990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.667 [2024-12-07 04:08:15.166012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:32.667 [2024-12-07 04:08:15.166025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.642 ms 00:23:32.667 [2024-12-07 04:08:15.166034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.667 [2024-12-07 04:08:15.246952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.667 [2024-12-07 04:08:15.247205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:32.667 [2024-12-07 04:08:15.247233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 81.020 ms 00:23:32.667 [2024-12-07 04:08:15.247244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.667 [2024-12-07 04:08:15.257414] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:32.667 [2024-12-07 04:08:15.272965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.667 [2024-12-07 04:08:15.273006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:32.667 [2024-12-07 04:08:15.273021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.616 ms 00:23:32.667 [2024-12-07 04:08:15.273053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.667 [2024-12-07 04:08:15.273164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.667 [2024-12-07 04:08:15.273177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:32.667 [2024-12-07 04:08:15.273189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:32.667 [2024-12-07 04:08:15.273198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.667 [2024-12-07 04:08:15.273254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.667 [2024-12-07 04:08:15.273266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:32.667 [2024-12-07 04:08:15.273276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:23:32.667 [2024-12-07 04:08:15.273289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.667 [2024-12-07 04:08:15.273323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.667 [2024-12-07 04:08:15.273335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:32.667 [2024-12-07 04:08:15.273346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:32.667 [2024-12-07 04:08:15.273355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.667 [2024-12-07 04:08:15.273407] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:32.667 [2024-12-07 04:08:15.273419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.667 [2024-12-07 04:08:15.273429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:32.667 [2024-12-07 04:08:15.273440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:32.667 [2024-12-07 04:08:15.273450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.667 [2024-12-07 04:08:15.308560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.667 [2024-12-07 04:08:15.308688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:32.667 [2024-12-07 04:08:15.308708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.145 ms 00:23:32.667 [2024-12-07 04:08:15.308734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.667 [2024-12-07 04:08:15.308842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.667 [2024-12-07 04:08:15.308856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:32.667 [2024-12-07 04:08:15.308867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:23:32.667 [2024-12-07 04:08:15.308877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
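Each management step in the startup trace above is bracketed by the same four trace_step notices (Action, name, duration, status), and the shutdown path replays the steps as Rollback entries in reverse order. To see where startup time goes, a small awk filter can pair the name/duration lines and rank the steps; a sketch, assuming a captured one-entry-per-line console log under a hypothetical name ftl0.log:

  awk '/428:trace_step/ {n = $0; sub(/.*name: /, "", n)}
       /430:trace_step/ {d = $0; sub(/.*duration: /, "", d); sub(/ ms.*/, "", d);
                         print d, "ms", n}' ftl0.log | sort -rn | head

On this run the top entries would be Restore P2L checkpoints (81.020 ms) and Initialize NV cache (56.420 ms), consistent with an open that restores existing on-disk state.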
00:23:32.667 [2024-12-07 04:08:15.309755] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:32.667 [2024-12-07 04:08:15.313970] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 384.103 ms, result 0 00:23:32.667 [2024-12-07 04:08:15.314645] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:32.667 [2024-12-07 04:08:15.332325] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:32.926  [2024-12-07T04:08:15.662Z] Copying: 4096/4096 [kB] (average 24 MBps)[2024-12-07 04:08:15.501329] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:32.926 [2024-12-07 04:08:15.515035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.926 [2024-12-07 04:08:15.515068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:32.926 [2024-12-07 04:08:15.515086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:32.926 [2024-12-07 04:08:15.515095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.926 [2024-12-07 04:08:15.515117] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:32.926 [2024-12-07 04:08:15.519062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.926 [2024-12-07 04:08:15.519085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:32.926 [2024-12-07 04:08:15.519097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.937 ms 00:23:32.926 [2024-12-07 04:08:15.519106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.926 [2024-12-07 04:08:15.520901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.926 [2024-12-07 04:08:15.520948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:32.926 [2024-12-07 04:08:15.520961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.775 ms 00:23:32.926 [2024-12-07 04:08:15.520972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.926 [2024-12-07 04:08:15.524150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.926 [2024-12-07 04:08:15.524287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:32.926 [2024-12-07 04:08:15.524306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.148 ms 00:23:32.926 [2024-12-07 04:08:15.524317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.926 [2024-12-07 04:08:15.529785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.926 [2024-12-07 04:08:15.529813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:32.926 [2024-12-07 04:08:15.529824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.440 ms 00:23:32.926 [2024-12-07 04:08:15.529833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.926 [2024-12-07 04:08:15.563188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.926 [2024-12-07 04:08:15.563222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:32.926 [2024-12-07 04:08:15.563235] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 33.349 ms 00:23:32.926 [2024-12-07 04:08:15.563244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.926 [2024-12-07 04:08:15.583675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.926 [2024-12-07 04:08:15.583714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:32.926 [2024-12-07 04:08:15.583726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.410 ms 00:23:32.926 [2024-12-07 04:08:15.583736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.926 [2024-12-07 04:08:15.583874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.926 [2024-12-07 04:08:15.583887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:32.926 [2024-12-07 04:08:15.583907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:23:32.926 [2024-12-07 04:08:15.583916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.926 [2024-12-07 04:08:15.619638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.926 [2024-12-07 04:08:15.619670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:32.926 [2024-12-07 04:08:15.619682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.750 ms 00:23:32.926 [2024-12-07 04:08:15.619690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.926 [2024-12-07 04:08:15.654528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.926 [2024-12-07 04:08:15.654658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:32.926 [2024-12-07 04:08:15.654677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.844 ms 00:23:32.926 [2024-12-07 04:08:15.654687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.185 [2024-12-07 04:08:15.688975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.185 [2024-12-07 04:08:15.689007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:33.185 [2024-12-07 04:08:15.689020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.293 ms 00:23:33.186 [2024-12-07 04:08:15.689029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.186 [2024-12-07 04:08:15.722739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.186 [2024-12-07 04:08:15.722772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:33.186 [2024-12-07 04:08:15.722784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.652 ms 00:23:33.186 [2024-12-07 04:08:15.722792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.186 [2024-12-07 04:08:15.722860] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:33.186 [2024-12-07 04:08:15.722876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.722888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.722898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.722909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:23:33.186 [2024-12-07 04:08:15.722919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.722942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.722953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.722963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.722974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.722984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.722994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723677] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:33.186 [2024-12-07 04:08:15.723917] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:33.186 [2024-12-07 04:08:15.723936] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 03b91864-17ea-4316-91d7-3cf42d3b8eda 00:23:33.186 [2024-12-07 04:08:15.723947] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:33.186 [2024-12-07 04:08:15.723956] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:23:33.186 [2024-12-07 04:08:15.723965] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:33.186 [2024-12-07 04:08:15.723975] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:33.186 [2024-12-07 04:08:15.723984] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:33.186 [2024-12-07 04:08:15.723994] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:33.186 [2024-12-07 04:08:15.724008] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:33.186 [2024-12-07 04:08:15.724016] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:33.186 [2024-12-07 04:08:15.724025] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:33.186 [2024-12-07 04:08:15.724034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.186 [2024-12-07 04:08:15.724044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:33.186 [2024-12-07 04:08:15.724054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.176 ms 00:23:33.186 [2024-12-07 04:08:15.724064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.186 [2024-12-07 04:08:15.742972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.186 [2024-12-07 04:08:15.743001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:33.186 [2024-12-07 04:08:15.743013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.916 ms 00:23:33.186 [2024-12-07 04:08:15.743023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.186 [2024-12-07 04:08:15.743577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:33.186 [2024-12-07 04:08:15.743598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:33.186 [2024-12-07 04:08:15.743609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.498 ms 00:23:33.186 [2024-12-07 04:08:15.743619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.187 [2024-12-07 04:08:15.795855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.187 [2024-12-07 04:08:15.795888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:33.187 [2024-12-07 04:08:15.795900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.187 [2024-12-07 04:08:15.795930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.187 [2024-12-07 04:08:15.796025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.187 [2024-12-07 04:08:15.796037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:33.187 [2024-12-07 04:08:15.796048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.187 [2024-12-07 04:08:15.796059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.187 [2024-12-07 04:08:15.796105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.187 [2024-12-07 04:08:15.796118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:33.187 [2024-12-07 04:08:15.796128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.187 [2024-12-07 04:08:15.796138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.187 [2024-12-07 04:08:15.796160] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.187 [2024-12-07 04:08:15.796170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:33.187 [2024-12-07 04:08:15.796180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.187 [2024-12-07 04:08:15.796190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.187 [2024-12-07 04:08:15.912771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.187 [2024-12-07 04:08:15.912819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:33.187 [2024-12-07 04:08:15.912832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.187 [2024-12-07 04:08:15.912848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.445 [2024-12-07 04:08:16.007475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.445 [2024-12-07 04:08:16.007522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:33.445 [2024-12-07 04:08:16.007536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.445 [2024-12-07 04:08:16.007545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.445 [2024-12-07 04:08:16.007607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.445 [2024-12-07 04:08:16.007618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:33.445 [2024-12-07 04:08:16.007628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.445 [2024-12-07 04:08:16.007638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.445 [2024-12-07 04:08:16.007664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.445 [2024-12-07 04:08:16.007679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:33.445 [2024-12-07 04:08:16.007689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.445 [2024-12-07 04:08:16.007699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.445 [2024-12-07 04:08:16.007812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.445 [2024-12-07 04:08:16.007825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:33.445 [2024-12-07 04:08:16.007834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.445 [2024-12-07 04:08:16.007844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.445 [2024-12-07 04:08:16.007878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.445 [2024-12-07 04:08:16.007890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:33.445 [2024-12-07 04:08:16.007903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.445 [2024-12-07 04:08:16.007913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.445 [2024-12-07 04:08:16.007980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.445 [2024-12-07 04:08:16.007992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:33.445 [2024-12-07 04:08:16.008002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.445 [2024-12-07 04:08:16.008012] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:23:33.445 [2024-12-07 04:08:16.008056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:33.445 [2024-12-07 04:08:16.008072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:33.445 [2024-12-07 04:08:16.008081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:33.445 [2024-12-07 04:08:16.008092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:33.445 [2024-12-07 04:08:16.008228] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 493.980 ms, result 0 00:23:34.382 00:23:34.382 00:23:34.382 04:08:17 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:23:34.382 04:08:17 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=78719 00:23:34.382 04:08:17 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 78719 00:23:34.382 04:08:17 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78719 ']' 00:23:34.382 04:08:17 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.382 04:08:17 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:34.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.382 04:08:17 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.382 04:08:17 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:34.382 04:08:17 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:34.642 [2024-12-07 04:08:17.163352] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
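The shell trace above is the restart step: trim.sh@92 launches a fresh spdk_tgt (pid 78719; its startup banner is the line above and the EAL parameters follow), waitforlisten blocks until the RPC socket answers, and trim.sh@96 replays the saved configuration, which triggers the second FTL startup trace below. A minimal sketch of that wait-then-configure pattern, assuming the stock rpc.py client, the default /var/tmp/spdk.sock socket, and that load_config reads the saved ftl.json on stdin (the real autotest_common.sh helper is more elaborate):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
  svcpid=$!
  # poll until the target is up and serving RPCs on /var/tmp/spdk.sock
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.2
  done
  # replay the saved configuration; this re-creates the bdevs and reopens ftl0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config \
      < /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json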
00:23:34.642 [2024-12-07 04:08:17.163475] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78719 ] 00:23:34.642 [2024-12-07 04:08:17.344302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.902 [2024-12-07 04:08:17.452827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:35.932 04:08:18 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.932 04:08:18 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:23:35.932 04:08:18 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:23:35.932 [2024-12-07 04:08:18.496834] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:35.932 [2024-12-07 04:08:18.496901] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:36.202 [2024-12-07 04:08:18.678339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.202 [2024-12-07 04:08:18.678388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:36.202 [2024-12-07 04:08:18.678410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:36.202 [2024-12-07 04:08:18.678421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.202 [2024-12-07 04:08:18.682005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.202 [2024-12-07 04:08:18.682039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:36.202 [2024-12-07 04:08:18.682053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.568 ms 00:23:36.202 [2024-12-07 04:08:18.682079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.202 [2024-12-07 04:08:18.682182] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:36.202 [2024-12-07 04:08:18.683122] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:36.202 [2024-12-07 04:08:18.683205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.202 [2024-12-07 04:08:18.683218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:36.202 [2024-12-07 04:08:18.683231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.035 ms 00:23:36.202 [2024-12-07 04:08:18.683241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.202 [2024-12-07 04:08:18.684701] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:36.202 [2024-12-07 04:08:18.703964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.202 [2024-12-07 04:08:18.704003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:36.202 [2024-12-07 04:08:18.704018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.298 ms 00:23:36.202 [2024-12-07 04:08:18.704030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.202 [2024-12-07 04:08:18.704119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.202 [2024-12-07 04:08:18.704139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:36.202 [2024-12-07 04:08:18.704150] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:23:36.202 [2024-12-07 04:08:18.704164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.202 [2024-12-07 04:08:18.710890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.202 [2024-12-07 04:08:18.711073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:36.202 [2024-12-07 04:08:18.711095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.685 ms 00:23:36.202 [2024-12-07 04:08:18.711112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.202 [2024-12-07 04:08:18.711255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.202 [2024-12-07 04:08:18.711275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:36.202 [2024-12-07 04:08:18.711287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:23:36.202 [2024-12-07 04:08:18.711310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.202 [2024-12-07 04:08:18.711338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.202 [2024-12-07 04:08:18.711354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:36.202 [2024-12-07 04:08:18.711364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:36.202 [2024-12-07 04:08:18.711379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.202 [2024-12-07 04:08:18.711405] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:36.202 [2024-12-07 04:08:18.716102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.202 [2024-12-07 04:08:18.716130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:36.202 [2024-12-07 04:08:18.716144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.707 ms 00:23:36.202 [2024-12-07 04:08:18.716170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.202 [2024-12-07 04:08:18.716249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.202 [2024-12-07 04:08:18.716262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:36.202 [2024-12-07 04:08:18.716276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:36.202 [2024-12-07 04:08:18.716292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.202 [2024-12-07 04:08:18.716319] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:36.202 [2024-12-07 04:08:18.716347] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:36.202 [2024-12-07 04:08:18.716396] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:36.202 [2024-12-07 04:08:18.716416] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:36.202 [2024-12-07 04:08:18.716507] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:36.202 [2024-12-07 04:08:18.716521] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:36.202 [2024-12-07 04:08:18.716544] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:36.202 [2024-12-07 04:08:18.716557] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:36.202 [2024-12-07 04:08:18.716573] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:36.202 [2024-12-07 04:08:18.716585] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:36.202 [2024-12-07 04:08:18.716599] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:36.202 [2024-12-07 04:08:18.716609] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:36.202 [2024-12-07 04:08:18.716628] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:36.202 [2024-12-07 04:08:18.716638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.202 [2024-12-07 04:08:18.716653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:36.202 [2024-12-07 04:08:18.716664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:23:36.202 [2024-12-07 04:08:18.716678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.202 [2024-12-07 04:08:18.716756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.202 [2024-12-07 04:08:18.716771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:36.202 [2024-12-07 04:08:18.716781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:23:36.202 [2024-12-07 04:08:18.716796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.202 [2024-12-07 04:08:18.716880] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:36.202 [2024-12-07 04:08:18.716897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:36.202 [2024-12-07 04:08:18.716908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:36.202 [2024-12-07 04:08:18.716922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.202 [2024-12-07 04:08:18.716955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:36.202 [2024-12-07 04:08:18.716971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:36.202 [2024-12-07 04:08:18.716981] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:36.202 [2024-12-07 04:08:18.717000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:36.202 [2024-12-07 04:08:18.717010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:36.203 [2024-12-07 04:08:18.717023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:36.203 [2024-12-07 04:08:18.717033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:36.203 [2024-12-07 04:08:18.717046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:36.203 [2024-12-07 04:08:18.717058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:36.203 [2024-12-07 04:08:18.717072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:36.203 [2024-12-07 04:08:18.717081] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:36.203 [2024-12-07 04:08:18.717093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.203 
[2024-12-07 04:08:18.717102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:36.203 [2024-12-07 04:08:18.717114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:36.203 [2024-12-07 04:08:18.717132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.203 [2024-12-07 04:08:18.717145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:36.203 [2024-12-07 04:08:18.717154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:36.203 [2024-12-07 04:08:18.717165] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.203 [2024-12-07 04:08:18.717174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:36.203 [2024-12-07 04:08:18.717188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:36.203 [2024-12-07 04:08:18.717197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.203 [2024-12-07 04:08:18.717209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:36.203 [2024-12-07 04:08:18.717218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:36.203 [2024-12-07 04:08:18.717233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.203 [2024-12-07 04:08:18.717242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:36.203 [2024-12-07 04:08:18.717257] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:36.203 [2024-12-07 04:08:18.717266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:36.203 [2024-12-07 04:08:18.717280] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:36.203 [2024-12-07 04:08:18.717289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:36.203 [2024-12-07 04:08:18.717309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:36.203 [2024-12-07 04:08:18.717318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:36.203 [2024-12-07 04:08:18.717332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:36.203 [2024-12-07 04:08:18.717341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:36.203 [2024-12-07 04:08:18.717355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:36.203 [2024-12-07 04:08:18.717364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:36.203 [2024-12-07 04:08:18.717381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.203 [2024-12-07 04:08:18.717390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:36.203 [2024-12-07 04:08:18.717404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:36.203 [2024-12-07 04:08:18.717414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.203 [2024-12-07 04:08:18.717428] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:36.203 [2024-12-07 04:08:18.717443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:36.203 [2024-12-07 04:08:18.717457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:36.203 [2024-12-07 04:08:18.717467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:36.203 [2024-12-07 04:08:18.717482] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:23:36.203 [2024-12-07 04:08:18.717492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:36.203 [2024-12-07 04:08:18.717506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:36.203 [2024-12-07 04:08:18.717516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:36.203 [2024-12-07 04:08:18.717529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:36.203 [2024-12-07 04:08:18.717538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:36.203 [2024-12-07 04:08:18.717553] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:36.203 [2024-12-07 04:08:18.717566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:36.203 [2024-12-07 04:08:18.717583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:36.203 [2024-12-07 04:08:18.717594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:36.203 [2024-12-07 04:08:18.717606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:36.203 [2024-12-07 04:08:18.717617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:36.203 [2024-12-07 04:08:18.717630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:36.203 [2024-12-07 04:08:18.717640] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:36.203 [2024-12-07 04:08:18.717654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:36.203 [2024-12-07 04:08:18.717664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:36.203 [2024-12-07 04:08:18.717679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:36.203 [2024-12-07 04:08:18.717689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:36.203 [2024-12-07 04:08:18.717704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:36.203 [2024-12-07 04:08:18.717714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:36.203 [2024-12-07 04:08:18.717729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:36.203 [2024-12-07 04:08:18.717739] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:36.203 [2024-12-07 04:08:18.717754] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:36.203 [2024-12-07 
04:08:18.717765] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:36.203 [2024-12-07 04:08:18.717785] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:36.203 [2024-12-07 04:08:18.717795] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:36.203 [2024-12-07 04:08:18.717810] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:36.203 [2024-12-07 04:08:18.717820] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:36.203 [2024-12-07 04:08:18.717835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.203 [2024-12-07 04:08:18.717846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:36.203 [2024-12-07 04:08:18.717861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.004 ms 00:23:36.203 [2024-12-07 04:08:18.717876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.203 [2024-12-07 04:08:18.754538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.203 [2024-12-07 04:08:18.754709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:36.203 [2024-12-07 04:08:18.754913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.648 ms 00:23:36.203 [2024-12-07 04:08:18.754972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.203 [2024-12-07 04:08:18.755117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.203 [2024-12-07 04:08:18.755181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:36.203 [2024-12-07 04:08:18.755280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:23:36.203 [2024-12-07 04:08:18.755311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.203 [2024-12-07 04:08:18.796705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.203 [2024-12-07 04:08:18.796834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:36.203 [2024-12-07 04:08:18.797003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.409 ms 00:23:36.203 [2024-12-07 04:08:18.797044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.203 [2024-12-07 04:08:18.797156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.203 [2024-12-07 04:08:18.797195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:36.203 [2024-12-07 04:08:18.797292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:36.203 [2024-12-07 04:08:18.797330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.203 [2024-12-07 04:08:18.797803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.203 [2024-12-07 04:08:18.797912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:36.203 [2024-12-07 04:08:18.798004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:23:36.203 [2024-12-07 04:08:18.798043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:36.203 [2024-12-07 04:08:18.798193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.203 [2024-12-07 04:08:18.798242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:36.203 [2024-12-07 04:08:18.798323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:23:36.203 [2024-12-07 04:08:18.798360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.203 [2024-12-07 04:08:18.820253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.203 [2024-12-07 04:08:18.820390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:36.203 [2024-12-07 04:08:18.820565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.875 ms 00:23:36.203 [2024-12-07 04:08:18.820643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.203 [2024-12-07 04:08:18.874885] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:36.204 [2024-12-07 04:08:18.875089] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:36.204 [2024-12-07 04:08:18.875238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.204 [2024-12-07 04:08:18.875276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:36.204 [2024-12-07 04:08:18.875315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.529 ms 00:23:36.204 [2024-12-07 04:08:18.875359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.204 [2024-12-07 04:08:18.904183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.204 [2024-12-07 04:08:18.904324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:36.204 [2024-12-07 04:08:18.904499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.759 ms 00:23:36.204 [2024-12-07 04:08:18.904539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.204 [2024-12-07 04:08:18.921783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.204 [2024-12-07 04:08:18.921947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:36.204 [2024-12-07 04:08:18.922045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.096 ms 00:23:36.204 [2024-12-07 04:08:18.922083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.464 [2024-12-07 04:08:18.939649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.464 [2024-12-07 04:08:18.939785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:36.464 [2024-12-07 04:08:18.939877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.492 ms 00:23:36.464 [2024-12-07 04:08:18.939914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.464 [2024-12-07 04:08:18.940748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.464 [2024-12-07 04:08:18.940870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:36.464 [2024-12-07 04:08:18.940970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.642 ms 00:23:36.464 [2024-12-07 04:08:18.941011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.464 [2024-12-07 
04:08:19.023884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.464 [2024-12-07 04:08:19.024139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:36.464 [2024-12-07 04:08:19.024175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.947 ms 00:23:36.464 [2024-12-07 04:08:19.024187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.464 [2024-12-07 04:08:19.034525] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:36.464 [2024-12-07 04:08:19.049874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.464 [2024-12-07 04:08:19.049942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:36.464 [2024-12-07 04:08:19.049980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.608 ms 00:23:36.464 [2024-12-07 04:08:19.049995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.464 [2024-12-07 04:08:19.050089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.464 [2024-12-07 04:08:19.050108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:36.464 [2024-12-07 04:08:19.050119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:36.464 [2024-12-07 04:08:19.050134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.464 [2024-12-07 04:08:19.050191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.464 [2024-12-07 04:08:19.050207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:36.464 [2024-12-07 04:08:19.050225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:36.464 [2024-12-07 04:08:19.050246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.464 [2024-12-07 04:08:19.050287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.464 [2024-12-07 04:08:19.050303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:36.464 [2024-12-07 04:08:19.050315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:23:36.464 [2024-12-07 04:08:19.050329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.464 [2024-12-07 04:08:19.050372] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:36.464 [2024-12-07 04:08:19.050394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.464 [2024-12-07 04:08:19.050410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:36.464 [2024-12-07 04:08:19.050425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:36.464 [2024-12-07 04:08:19.050436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.464 [2024-12-07 04:08:19.084825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.464 [2024-12-07 04:08:19.084972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:36.464 [2024-12-07 04:08:19.085020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.406 ms 00:23:36.464 [2024-12-07 04:08:19.085032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.464 [2024-12-07 04:08:19.085185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.464 [2024-12-07 04:08:19.085200] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:36.464 [2024-12-07 04:08:19.085217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:36.464 [2024-12-07 04:08:19.085233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.464 [2024-12-07 04:08:19.086132] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:36.464 [2024-12-07 04:08:19.090042] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 408.153 ms, result 0 00:23:36.464 [2024-12-07 04:08:19.091218] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:36.464 Some configs were skipped because the RPC state that can call them passed over. 00:23:36.464 04:08:19 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:23:36.723 [2024-12-07 04:08:19.342459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.723 [2024-12-07 04:08:19.342662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:36.723 [2024-12-07 04:08:19.342754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.639 ms 00:23:36.723 [2024-12-07 04:08:19.342802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.723 [2024-12-07 04:08:19.342940] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.100 ms, result 0 00:23:36.723 true 00:23:36.723 04:08:19 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:23:36.982 [2024-12-07 04:08:19.565901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:36.982 [2024-12-07 04:08:19.566054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:23:36.982 [2024-12-07 04:08:19.566133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.261 ms 00:23:36.982 [2024-12-07 04:08:19.566171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:36.982 [2024-12-07 04:08:19.566248] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.599 ms, result 0 00:23:36.982 true 00:23:36.982 04:08:19 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 78719 00:23:36.982 04:08:19 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78719 ']' 00:23:36.982 04:08:19 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78719 00:23:36.982 04:08:19 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:23:36.982 04:08:19 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.982 04:08:19 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78719 00:23:36.982 killing process with pid 78719 00:23:36.982 04:08:19 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:36.982 04:08:19 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:36.982 04:08:19 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78719' 00:23:36.982 04:08:19 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78719 00:23:36.982 04:08:19 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78719 00:23:38.365 [2024-12-07 04:08:20.686371] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.365 [2024-12-07 04:08:20.686424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:38.365 [2024-12-07 04:08:20.686439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:38.365 [2024-12-07 04:08:20.686468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.365 [2024-12-07 04:08:20.686494] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:38.365 [2024-12-07 04:08:20.690664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.365 [2024-12-07 04:08:20.690696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:38.365 [2024-12-07 04:08:20.690714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.155 ms 00:23:38.365 [2024-12-07 04:08:20.690724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.365 [2024-12-07 04:08:20.690980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.365 [2024-12-07 04:08:20.690994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:38.365 [2024-12-07 04:08:20.691007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.213 ms 00:23:38.365 [2024-12-07 04:08:20.691016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.365 [2024-12-07 04:08:20.694362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.365 [2024-12-07 04:08:20.694398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:38.365 [2024-12-07 04:08:20.694415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.328 ms 00:23:38.365 [2024-12-07 04:08:20.694426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.365 [2024-12-07 04:08:20.699824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.365 [2024-12-07 04:08:20.699857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:38.365 [2024-12-07 04:08:20.699873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.368 ms 00:23:38.365 [2024-12-07 04:08:20.699883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.365 [2024-12-07 04:08:20.714235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.365 [2024-12-07 04:08:20.714293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:38.365 [2024-12-07 04:08:20.714311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.307 ms 00:23:38.365 [2024-12-07 04:08:20.714320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.365 [2024-12-07 04:08:20.724578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.365 [2024-12-07 04:08:20.724615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:38.365 [2024-12-07 04:08:20.724631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.222 ms 00:23:38.365 [2024-12-07 04:08:20.724641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.365 [2024-12-07 04:08:20.724770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.365 [2024-12-07 04:08:20.724783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:38.365 [2024-12-07 04:08:20.724796] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:23:38.365 [2024-12-07 04:08:20.724806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.365 [2024-12-07 04:08:20.739867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.365 [2024-12-07 04:08:20.740028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:38.365 [2024-12-07 04:08:20.740069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.062 ms 00:23:38.365 [2024-12-07 04:08:20.740079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.365 [2024-12-07 04:08:20.754570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.365 [2024-12-07 04:08:20.754714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:38.365 [2024-12-07 04:08:20.754759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.441 ms 00:23:38.365 [2024-12-07 04:08:20.754769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.365 [2024-12-07 04:08:20.769716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.365 [2024-12-07 04:08:20.769862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:38.365 [2024-12-07 04:08:20.769887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.916 ms 00:23:38.365 [2024-12-07 04:08:20.769897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.365 [2024-12-07 04:08:20.784431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.365 [2024-12-07 04:08:20.784462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:38.365 [2024-12-07 04:08:20.784478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.439 ms 00:23:38.365 [2024-12-07 04:08:20.784487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.365 [2024-12-07 04:08:20.784555] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:38.365 [2024-12-07 04:08:20.784573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:38.365 [2024-12-07 04:08:20.784588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:38.365 [2024-12-07 04:08:20.784599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:38.365 [2024-12-07 04:08:20.784612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:38.365 [2024-12-07 04:08:20.784623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:38.365 [2024-12-07 04:08:20.784639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:38.365 [2024-12-07 04:08:20.784650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:38.365 [2024-12-07 04:08:20.784663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:38.365 [2024-12-07 04:08:20.784674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:38.365 [2024-12-07 04:08:20.784687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:38.365 [2024-12-07 
04:08:20.784697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:38.365 [2024-12-07 04:08:20.784711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:38.365 [2024-12-07 04:08:20.784721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:38.365 [2024-12-07 04:08:20.784734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:38.365 [2024-12-07 04:08:20.784745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:38.365 [2024-12-07 04:08:20.784760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:38.365 [2024-12-07 04:08:20.784770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:38.365 [2024-12-07 04:08:20.784783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:38.365 [2024-12-07 04:08:20.784793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:38.365 [2024-12-07 04:08:20.784807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:38.365 [2024-12-07 04:08:20.784817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:38.365 [2024-12-07 04:08:20.784833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:38.365 [2024-12-07 04:08:20.784844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.784858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.784868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.784882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.784893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.784906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.784917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:23:38.366 [2024-12-07 04:08:20.785471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.785996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:38.366 [2024-12-07 04:08:20.786504] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:38.366 [2024-12-07 04:08:20.786524] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 03b91864-17ea-4316-91d7-3cf42d3b8eda 00:23:38.366 [2024-12-07 04:08:20.786538] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:38.366 [2024-12-07 04:08:20.786550] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:38.366 [2024-12-07 04:08:20.786559] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:38.366 [2024-12-07 04:08:20.786573] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:38.366 [2024-12-07 04:08:20.786582] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:38.366 [2024-12-07 04:08:20.786595] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:38.366 [2024-12-07 04:08:20.786605] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:38.366 [2024-12-07 04:08:20.786617] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:38.366 [2024-12-07 04:08:20.786626] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:38.366 [2024-12-07 04:08:20.786638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
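A quick cross-check of the trim and statistics output above, as a sketch (the WAF formula is inferred from the dump itself, where 960 internal writes against zero user writes print as inf):

    # The second bdev_ftl_unmap range (trim.sh@100) ends exactly at the
    # L2P entry count reported during startup (23592960 entries):
    echo $((23591936 + 1024))    # -> 23592960
    # WAF is total writes over user writes; 960 / 0 is why the dump shows "inf".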
00:23:38.366 [2024-12-07 04:08:20.786649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:38.366 [2024-12-07 04:08:20.786663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.089 ms 00:23:38.366 [2024-12-07 04:08:20.786674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.366 [2024-12-07 04:08:20.806530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.367 [2024-12-07 04:08:20.806565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:38.367 [2024-12-07 04:08:20.806584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.856 ms 00:23:38.367 [2024-12-07 04:08:20.806595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.367 [2024-12-07 04:08:20.807165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.367 [2024-12-07 04:08:20.807185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:38.367 [2024-12-07 04:08:20.807202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:23:38.367 [2024-12-07 04:08:20.807212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.367 [2024-12-07 04:08:20.875163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.367 [2024-12-07 04:08:20.875307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:38.367 [2024-12-07 04:08:20.875334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.367 [2024-12-07 04:08:20.875345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.367 [2024-12-07 04:08:20.875435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.367 [2024-12-07 04:08:20.875448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:38.367 [2024-12-07 04:08:20.875465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.367 [2024-12-07 04:08:20.875475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.367 [2024-12-07 04:08:20.875534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.367 [2024-12-07 04:08:20.875547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:38.367 [2024-12-07 04:08:20.875636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.367 [2024-12-07 04:08:20.875647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.367 [2024-12-07 04:08:20.875669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.367 [2024-12-07 04:08:20.875679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:38.367 [2024-12-07 04:08:20.875692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.367 [2024-12-07 04:08:20.875706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.367 [2024-12-07 04:08:20.995481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.367 [2024-12-07 04:08:20.995534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:38.367 [2024-12-07 04:08:20.995552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.367 [2024-12-07 04:08:20.995562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.367 [2024-12-07 
04:08:21.089345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.367 [2024-12-07 04:08:21.089397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:38.367 [2024-12-07 04:08:21.089413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.367 [2024-12-07 04:08:21.089427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.367 [2024-12-07 04:08:21.089514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.367 [2024-12-07 04:08:21.089526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:38.367 [2024-12-07 04:08:21.089542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.367 [2024-12-07 04:08:21.089551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.367 [2024-12-07 04:08:21.089581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.367 [2024-12-07 04:08:21.089592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:38.367 [2024-12-07 04:08:21.089604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.367 [2024-12-07 04:08:21.089614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.367 [2024-12-07 04:08:21.089719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.367 [2024-12-07 04:08:21.089732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:38.367 [2024-12-07 04:08:21.089744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.367 [2024-12-07 04:08:21.089753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.367 [2024-12-07 04:08:21.089791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.367 [2024-12-07 04:08:21.089803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:38.367 [2024-12-07 04:08:21.089814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.367 [2024-12-07 04:08:21.089824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.367 [2024-12-07 04:08:21.089867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.367 [2024-12-07 04:08:21.089878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:38.367 [2024-12-07 04:08:21.089893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.367 [2024-12-07 04:08:21.089902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.367 [2024-12-07 04:08:21.089980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:38.367 [2024-12-07 04:08:21.089994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:38.367 [2024-12-07 04:08:21.090007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:38.367 [2024-12-07 04:08:21.090017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.367 [2024-12-07 04:08:21.090159] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 404.415 ms, result 0 00:23:39.748 04:08:22 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:39.748 [2024-12-07 04:08:22.168206] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:23:39.749 [2024-12-07 04:08:22.168344] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78787 ] 00:23:39.749 [2024-12-07 04:08:22.352307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.749 [2024-12-07 04:08:22.457202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.318 [2024-12-07 04:08:22.811854] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:40.318 [2024-12-07 04:08:22.811925] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:40.318 [2024-12-07 04:08:22.972857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.318 [2024-12-07 04:08:22.972906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:40.318 [2024-12-07 04:08:22.972921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:40.318 [2024-12-07 04:08:22.972943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.318 [2024-12-07 04:08:22.976035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.318 [2024-12-07 04:08:22.976069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:40.318 [2024-12-07 04:08:22.976082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.061 ms 00:23:40.318 [2024-12-07 04:08:22.976107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.318 [2024-12-07 04:08:22.976198] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:40.318 [2024-12-07 04:08:22.977177] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:40.318 [2024-12-07 04:08:22.977212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.318 [2024-12-07 04:08:22.977223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:40.318 [2024-12-07 04:08:22.977233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.022 ms 00:23:40.318 [2024-12-07 04:08:22.977243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.318 [2024-12-07 04:08:22.978839] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:40.318 [2024-12-07 04:08:22.997968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.318 [2024-12-07 04:08:22.998119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:40.318 [2024-12-07 04:08:22.998140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.161 ms 00:23:40.318 [2024-12-07 04:08:22.998167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.318 [2024-12-07 04:08:22.998308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.318 [2024-12-07 04:08:22.998324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:40.318 [2024-12-07 04:08:22.998335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:23:40.318 [2024-12-07 
04:08:22.998345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.318 [2024-12-07 04:08:23.005004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.318 [2024-12-07 04:08:23.005138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:40.318 [2024-12-07 04:08:23.005157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.629 ms 00:23:40.318 [2024-12-07 04:08:23.005183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.318 [2024-12-07 04:08:23.005292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.318 [2024-12-07 04:08:23.005307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:40.318 [2024-12-07 04:08:23.005317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:23:40.318 [2024-12-07 04:08:23.005327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.318 [2024-12-07 04:08:23.005357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.318 [2024-12-07 04:08:23.005368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:40.318 [2024-12-07 04:08:23.005379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:40.318 [2024-12-07 04:08:23.005388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.318 [2024-12-07 04:08:23.005411] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:23:40.318 [2024-12-07 04:08:23.010069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.318 [2024-12-07 04:08:23.010097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:40.318 [2024-12-07 04:08:23.010108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.671 ms 00:23:40.318 [2024-12-07 04:08:23.010134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.318 [2024-12-07 04:08:23.010199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.318 [2024-12-07 04:08:23.010212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:40.318 [2024-12-07 04:08:23.010231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:40.318 [2024-12-07 04:08:23.010241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.318 [2024-12-07 04:08:23.010268] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:40.318 [2024-12-07 04:08:23.010292] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:40.318 [2024-12-07 04:08:23.010327] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:40.318 [2024-12-07 04:08:23.010345] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:40.318 [2024-12-07 04:08:23.010433] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:40.318 [2024-12-07 04:08:23.010446] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:40.319 [2024-12-07 04:08:23.010459] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:23:40.319 [2024-12-07 04:08:23.010475] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:40.319 [2024-12-07 04:08:23.010487] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:40.319 [2024-12-07 04:08:23.010498] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:23:40.319 [2024-12-07 04:08:23.010508] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:40.319 [2024-12-07 04:08:23.010518] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:40.319 [2024-12-07 04:08:23.010527] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:40.319 [2024-12-07 04:08:23.010538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.319 [2024-12-07 04:08:23.010548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:40.319 [2024-12-07 04:08:23.010558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:23:40.319 [2024-12-07 04:08:23.010567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.319 [2024-12-07 04:08:23.010641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.319 [2024-12-07 04:08:23.010654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:40.319 [2024-12-07 04:08:23.010664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:23:40.319 [2024-12-07 04:08:23.010674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.319 [2024-12-07 04:08:23.010762] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:40.319 [2024-12-07 04:08:23.010775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:40.319 [2024-12-07 04:08:23.010786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:40.319 [2024-12-07 04:08:23.010796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.319 [2024-12-07 04:08:23.010807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:40.319 [2024-12-07 04:08:23.010816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:40.319 [2024-12-07 04:08:23.010826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:23:40.319 [2024-12-07 04:08:23.010836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:40.319 [2024-12-07 04:08:23.010845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:23:40.319 [2024-12-07 04:08:23.010855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:40.319 [2024-12-07 04:08:23.010865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:40.319 [2024-12-07 04:08:23.010884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:23:40.319 [2024-12-07 04:08:23.010893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:40.319 [2024-12-07 04:08:23.010902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:40.319 [2024-12-07 04:08:23.010912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:23:40.319 [2024-12-07 04:08:23.010921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.319 [2024-12-07 04:08:23.010950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:23:40.319 [2024-12-07 04:08:23.010960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:23:40.319 [2024-12-07 04:08:23.010970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.319 [2024-12-07 04:08:23.010980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:40.319 [2024-12-07 04:08:23.010989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:23:40.319 [2024-12-07 04:08:23.010998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.319 [2024-12-07 04:08:23.011008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:40.319 [2024-12-07 04:08:23.011017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:23:40.319 [2024-12-07 04:08:23.011042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.319 [2024-12-07 04:08:23.011051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:40.319 [2024-12-07 04:08:23.011060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:23:40.319 [2024-12-07 04:08:23.011069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.319 [2024-12-07 04:08:23.011079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:40.319 [2024-12-07 04:08:23.011088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:23:40.319 [2024-12-07 04:08:23.011096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:40.319 [2024-12-07 04:08:23.011105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:40.319 [2024-12-07 04:08:23.011114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:23:40.319 [2024-12-07 04:08:23.011124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:40.319 [2024-12-07 04:08:23.011132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:40.319 [2024-12-07 04:08:23.011148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:23:40.319 [2024-12-07 04:08:23.011157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:40.319 [2024-12-07 04:08:23.011166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:40.319 [2024-12-07 04:08:23.011175] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:23:40.319 [2024-12-07 04:08:23.011184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.319 [2024-12-07 04:08:23.011194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:40.319 [2024-12-07 04:08:23.011203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:23:40.319 [2024-12-07 04:08:23.011213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.319 [2024-12-07 04:08:23.011222] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:40.319 [2024-12-07 04:08:23.011232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:40.319 [2024-12-07 04:08:23.011246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:40.319 [2024-12-07 04:08:23.011256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:40.319 [2024-12-07 04:08:23.011267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:40.319 [2024-12-07 04:08:23.011277] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:40.319 [2024-12-07 04:08:23.011286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:40.319 [2024-12-07 04:08:23.011295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:40.319 [2024-12-07 04:08:23.011304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:40.319 [2024-12-07 04:08:23.011313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:40.319 [2024-12-07 04:08:23.011324] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:40.319 [2024-12-07 04:08:23.011337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:40.319 [2024-12-07 04:08:23.011348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:23:40.319 [2024-12-07 04:08:23.011358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:23:40.319 [2024-12-07 04:08:23.011369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:23:40.319 [2024-12-07 04:08:23.011380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:23:40.319 [2024-12-07 04:08:23.011390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:23:40.319 [2024-12-07 04:08:23.011400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:23:40.319 [2024-12-07 04:08:23.011410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:23:40.319 [2024-12-07 04:08:23.011420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:23:40.319 [2024-12-07 04:08:23.011432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:23:40.319 [2024-12-07 04:08:23.011442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:23:40.319 [2024-12-07 04:08:23.011452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:23:40.319 [2024-12-07 04:08:23.011462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:23:40.319 [2024-12-07 04:08:23.011472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:23:40.319 [2024-12-07 04:08:23.011482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:23:40.319 [2024-12-07 04:08:23.011491] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:40.319 [2024-12-07 04:08:23.011503] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:40.319 [2024-12-07 04:08:23.011514] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:40.319 [2024-12-07 04:08:23.011524] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:40.319 [2024-12-07 04:08:23.011535] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:40.319 [2024-12-07 04:08:23.011547] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:40.319 [2024-12-07 04:08:23.011558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.319 [2024-12-07 04:08:23.011573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:40.319 [2024-12-07 04:08:23.011583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.849 ms 00:23:40.319 [2024-12-07 04:08:23.011592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.319 [2024-12-07 04:08:23.048857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.319 [2024-12-07 04:08:23.048892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:40.319 [2024-12-07 04:08:23.048906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.264 ms 00:23:40.320 [2024-12-07 04:08:23.048917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.320 [2024-12-07 04:08:23.049048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.320 [2024-12-07 04:08:23.049062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:40.320 [2024-12-07 04:08:23.049074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:23:40.320 [2024-12-07 04:08:23.049084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.579 [2024-12-07 04:08:23.123853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.579 [2024-12-07 04:08:23.123895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:40.579 [2024-12-07 04:08:23.123913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.868 ms 00:23:40.579 [2024-12-07 04:08:23.123925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.579 [2024-12-07 04:08:23.124058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.579 [2024-12-07 04:08:23.124072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:40.579 [2024-12-07 04:08:23.124083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:40.579 [2024-12-07 04:08:23.124093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.579 [2024-12-07 04:08:23.124541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.579 [2024-12-07 04:08:23.124560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:40.579 [2024-12-07 04:08:23.124578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:23:40.579 [2024-12-07 04:08:23.124588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.579 [2024-12-07 04:08:23.124702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:40.579 [2024-12-07 04:08:23.124717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:40.579 [2024-12-07 04:08:23.124728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:23:40.579 [2024-12-07 04:08:23.124738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.579 [2024-12-07 04:08:23.145041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.579 [2024-12-07 04:08:23.145188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:40.579 [2024-12-07 04:08:23.145277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.313 ms 00:23:40.579 [2024-12-07 04:08:23.145315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.579 [2024-12-07 04:08:23.164353] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:40.579 [2024-12-07 04:08:23.164513] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:40.579 [2024-12-07 04:08:23.164549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.579 [2024-12-07 04:08:23.164560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:40.579 [2024-12-07 04:08:23.164572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.128 ms 00:23:40.579 [2024-12-07 04:08:23.164582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.579 [2024-12-07 04:08:23.193464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.579 [2024-12-07 04:08:23.193502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:40.579 [2024-12-07 04:08:23.193516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.850 ms 00:23:40.579 [2024-12-07 04:08:23.193527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.579 [2024-12-07 04:08:23.211345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.579 [2024-12-07 04:08:23.211380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:40.579 [2024-12-07 04:08:23.211392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.767 ms 00:23:40.579 [2024-12-07 04:08:23.211402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.579 [2024-12-07 04:08:23.228887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.579 [2024-12-07 04:08:23.228920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:40.579 [2024-12-07 04:08:23.228957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.439 ms 00:23:40.579 [2024-12-07 04:08:23.228967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.579 [2024-12-07 04:08:23.229757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.579 [2024-12-07 04:08:23.229790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:40.579 [2024-12-07 04:08:23.229802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.661 ms 00:23:40.579 [2024-12-07 04:08:23.229812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.579 [2024-12-07 04:08:23.311516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.579 [2024-12-07 
04:08:23.311590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:40.580 [2024-12-07 04:08:23.311608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.808 ms 00:23:40.580 [2024-12-07 04:08:23.311635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.839 [2024-12-07 04:08:23.322274] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:23:40.839 [2024-12-07 04:08:23.337984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.839 [2024-12-07 04:08:23.338029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:40.839 [2024-12-07 04:08:23.338044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.294 ms 00:23:40.839 [2024-12-07 04:08:23.338060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.839 [2024-12-07 04:08:23.338176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.839 [2024-12-07 04:08:23.338188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:40.839 [2024-12-07 04:08:23.338199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:40.839 [2024-12-07 04:08:23.338209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.839 [2024-12-07 04:08:23.338285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.839 [2024-12-07 04:08:23.338297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:40.839 [2024-12-07 04:08:23.338307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:23:40.839 [2024-12-07 04:08:23.338322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.839 [2024-12-07 04:08:23.338358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.839 [2024-12-07 04:08:23.338371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:40.839 [2024-12-07 04:08:23.338382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:40.839 [2024-12-07 04:08:23.338391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.839 [2024-12-07 04:08:23.338427] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:40.839 [2024-12-07 04:08:23.338439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.839 [2024-12-07 04:08:23.338449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:40.839 [2024-12-07 04:08:23.338459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:40.839 [2024-12-07 04:08:23.338469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.839 [2024-12-07 04:08:23.373251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.839 [2024-12-07 04:08:23.373288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:40.839 [2024-12-07 04:08:23.373302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.815 ms 00:23:40.839 [2024-12-07 04:08:23.373312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.839 [2024-12-07 04:08:23.373416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:40.840 [2024-12-07 04:08:23.373429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:40.840 [2024-12-07 
04:08:23.373440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:23:40.840 [2024-12-07 04:08:23.373450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:40.840 [2024-12-07 04:08:23.374429] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:40.840 [2024-12-07 04:08:23.378403] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 401.922 ms, result 0 00:23:40.840 [2024-12-07 04:08:23.379307] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:40.840 [2024-12-07 04:08:23.396802] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:41.777  [2024-12-07T04:08:25.893Z] Copying: 28/256 [MB] (28 MBps) [2024-12-07T04:08:26.461Z] Copying: 52/256 [MB] (24 MBps) [2024-12-07T04:08:27.841Z] Copying: 77/256 [MB] (24 MBps) [2024-12-07T04:08:28.777Z] Copying: 102/256 [MB] (24 MBps) [2024-12-07T04:08:29.709Z] Copying: 126/256 [MB] (24 MBps) [2024-12-07T04:08:30.676Z] Copying: 151/256 [MB] (24 MBps) [2024-12-07T04:08:31.612Z] Copying: 175/256 [MB] (24 MBps) [2024-12-07T04:08:32.547Z] Copying: 200/256 [MB] (24 MBps) [2024-12-07T04:08:33.483Z] Copying: 224/256 [MB] (24 MBps) [2024-12-07T04:08:33.742Z] Copying: 249/256 [MB] (24 MBps) [2024-12-07T04:08:34.002Z] Copying: 256/256 [MB] (average 24 MBps)[2024-12-07 04:08:33.868470] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:51.266 [2024-12-07 04:08:33.898725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.266 [2024-12-07 04:08:33.898780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:51.266 [2024-12-07 04:08:33.898812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:51.266 [2024-12-07 04:08:33.898828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.266 [2024-12-07 04:08:33.898863] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:23:51.266 [2024-12-07 04:08:33.903304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.266 [2024-12-07 04:08:33.903335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:51.266 [2024-12-07 04:08:33.903349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.425 ms 00:23:51.266 [2024-12-07 04:08:33.903359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.266 [2024-12-07 04:08:33.903604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.266 [2024-12-07 04:08:33.903618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:51.266 [2024-12-07 04:08:33.903631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:23:51.266 [2024-12-07 04:08:33.903641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.266 [2024-12-07 04:08:33.906525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.266 [2024-12-07 04:08:33.906548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:51.266 [2024-12-07 04:08:33.906560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.867 ms 00:23:51.266 [2024-12-07 04:08:33.906571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:51.266 [2024-12-07 04:08:33.912150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.266 [2024-12-07 04:08:33.912181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:51.266 [2024-12-07 04:08:33.912193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.567 ms 00:23:51.266 [2024-12-07 04:08:33.912202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.266 [2024-12-07 04:08:33.946106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.266 [2024-12-07 04:08:33.946145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:51.266 [2024-12-07 04:08:33.946159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.885 ms 00:23:51.266 [2024-12-07 04:08:33.946168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.266 [2024-12-07 04:08:33.966040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.266 [2024-12-07 04:08:33.966199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:51.266 [2024-12-07 04:08:33.966235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.863 ms 00:23:51.266 [2024-12-07 04:08:33.966246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.266 [2024-12-07 04:08:33.966402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.266 [2024-12-07 04:08:33.966416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:51.266 [2024-12-07 04:08:33.966439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:23:51.266 [2024-12-07 04:08:33.966449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.527 [2024-12-07 04:08:34.001254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.527 [2024-12-07 04:08:34.001289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:51.527 [2024-12-07 04:08:34.001303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.842 ms 00:23:51.527 [2024-12-07 04:08:34.001327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.527 [2024-12-07 04:08:34.035978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.527 [2024-12-07 04:08:34.036011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:51.527 [2024-12-07 04:08:34.036025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.666 ms 00:23:51.527 [2024-12-07 04:08:34.036035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.527 [2024-12-07 04:08:34.071132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.527 [2024-12-07 04:08:34.071167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:51.527 [2024-12-07 04:08:34.071180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.116 ms 00:23:51.527 [2024-12-07 04:08:34.071190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.527 [2024-12-07 04:08:34.105864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.527 [2024-12-07 04:08:34.106014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:51.527 [2024-12-07 04:08:34.106051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.652 ms 00:23:51.527 
[2024-12-07 04:08:34.106061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.527 [2024-12-07 04:08:34.106103] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:51.527 [2024-12-07 04:08:34.106119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:51.527 [2024-12-07 04:08:34.106132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:51.527 [2024-12-07 04:08:34.106143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:51.527 [2024-12-07 04:08:34.106154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:51.527 [2024-12-07 04:08:34.106165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:51.527 [2024-12-07 04:08:34.106176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:51.527 [2024-12-07 04:08:34.106187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:51.527 [2024-12-07 04:08:34.106197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:51.527 [2024-12-07 04:08:34.106207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:51.527 [2024-12-07 04:08:34.106218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:51.527 [2024-12-07 04:08:34.106237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:51.527 [2024-12-07 04:08:34.106248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:51.527 [2024-12-07 04:08:34.106259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106375] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 
04:08:34.106641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:23:51.528 [2024-12-07 04:08:34.106905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.106999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.107010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.107020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.107031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.107042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.107053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.107064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.107074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.107084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.107095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.107106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.107117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.107138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.107149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.107160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.107171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.107181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.107191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.107202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:51.528 [2024-12-07 04:08:34.107219] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:51.528 [2024-12-07 04:08:34.107230] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 03b91864-17ea-4316-91d7-3cf42d3b8eda 00:23:51.529 [2024-12-07 04:08:34.107241] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:51.529 [2024-12-07 04:08:34.107251] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:51.529 [2024-12-07 04:08:34.107261] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:51.529 [2024-12-07 04:08:34.107271] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:51.529 [2024-12-07 04:08:34.107280] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:51.529 [2024-12-07 04:08:34.107290] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:51.529 [2024-12-07 04:08:34.107305] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:51.529 [2024-12-07 04:08:34.107315] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:51.529 [2024-12-07 04:08:34.107324] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:51.529 [2024-12-07 04:08:34.107334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.529 [2024-12-07 04:08:34.107344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:51.529 [2024-12-07 04:08:34.107355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.233 ms 00:23:51.529 [2024-12-07 04:08:34.107364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.529 [2024-12-07 04:08:34.126379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.529 [2024-12-07 04:08:34.126408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:51.529 [2024-12-07 04:08:34.126420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.025 ms 00:23:51.529 [2024-12-07 04:08:34.126429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.529 [2024-12-07 04:08:34.126981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:51.529 [2024-12-07 04:08:34.126994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:51.529 [2024-12-07 04:08:34.127004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:23:51.529 [2024-12-07 04:08:34.127014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.529 [2024-12-07 04:08:34.179394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.529 [2024-12-07 04:08:34.179426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:51.529 [2024-12-07 04:08:34.179438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.529 [2024-12-07 04:08:34.179468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.529 [2024-12-07 04:08:34.179558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.529 [2024-12-07 04:08:34.179569] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:51.529 [2024-12-07 04:08:34.179580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.529 [2024-12-07 04:08:34.179590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.529 [2024-12-07 04:08:34.179637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.529 [2024-12-07 04:08:34.179649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:51.529 [2024-12-07 04:08:34.179660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.529 [2024-12-07 04:08:34.179669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.529 [2024-12-07 04:08:34.179692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.529 [2024-12-07 04:08:34.179702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:51.529 [2024-12-07 04:08:34.179713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.529 [2024-12-07 04:08:34.179722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.789 [2024-12-07 04:08:34.296764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.789 [2024-12-07 04:08:34.296819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:51.789 [2024-12-07 04:08:34.296834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.789 [2024-12-07 04:08:34.296845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.789 [2024-12-07 04:08:34.391110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.789 [2024-12-07 04:08:34.391157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:51.789 [2024-12-07 04:08:34.391170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.789 [2024-12-07 04:08:34.391180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.789 [2024-12-07 04:08:34.391243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.789 [2024-12-07 04:08:34.391254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:51.789 [2024-12-07 04:08:34.391264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.789 [2024-12-07 04:08:34.391274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.789 [2024-12-07 04:08:34.391301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.789 [2024-12-07 04:08:34.391317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:51.789 [2024-12-07 04:08:34.391327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.789 [2024-12-07 04:08:34.391336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.789 [2024-12-07 04:08:34.391437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.789 [2024-12-07 04:08:34.391450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:51.789 [2024-12-07 04:08:34.391461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.789 [2024-12-07 04:08:34.391470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.789 [2024-12-07 04:08:34.391505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:23:51.789 [2024-12-07 04:08:34.391516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:51.789 [2024-12-07 04:08:34.391530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.789 [2024-12-07 04:08:34.391539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.789 [2024-12-07 04:08:34.391580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.789 [2024-12-07 04:08:34.391591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:51.789 [2024-12-07 04:08:34.391600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.789 [2024-12-07 04:08:34.391610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.789 [2024-12-07 04:08:34.391651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:51.789 [2024-12-07 04:08:34.391665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:51.789 [2024-12-07 04:08:34.391675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:51.789 [2024-12-07 04:08:34.391684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:51.789 [2024-12-07 04:08:34.391815] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 493.907 ms, result 0 00:23:52.727 00:23:52.727 00:23:52.727 04:08:35 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:53.296 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:23:53.296 04:08:35 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:23:53.296 04:08:35 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:23:53.296 04:08:35 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:53.296 04:08:35 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:53.296 04:08:35 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:23:53.296 04:08:35 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:23:53.296 04:08:35 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 78719 00:23:53.296 04:08:35 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78719 ']' 00:23:53.296 04:08:35 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78719 00:23:53.296 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78719) - No such process 00:23:53.296 Process with pid 78719 is not found 00:23:53.296 04:08:35 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 78719 is not found' 00:23:53.296 ************************************ 00:23:53.296 END TEST ftl_trim 00:23:53.296 ************************************ 00:23:53.296 00:23:53.296 real 1m7.928s 00:23:53.296 user 1m29.863s 00:23:53.296 sys 0m6.596s 00:23:53.296 04:08:35 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:53.296 04:08:35 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:23:53.556 04:08:36 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:23:53.556 04:08:36 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:53.556 04:08:36 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:53.556 04:08:36 ftl -- common/autotest_common.sh@10 
-- # set +x 00:23:53.556 ************************************ 00:23:53.556 START TEST ftl_restore 00:23:53.556 ************************************ 00:23:53.556 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:23:53.556 * Looking for test storage... 00:23:53.556 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:53.556 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:53.556 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:23:53.556 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:53.556 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:53.556 04:08:36 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:53.556 04:08:36 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:53.556 04:08:36 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:53.556 04:08:36 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:23:53.556 04:08:36 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:23:53.556 04:08:36 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:23:53.556 04:08:36 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:23:53.556 04:08:36 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:23:53.556 04:08:36 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:23:53.556 04:08:36 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:23:53.556 04:08:36 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:53.556 04:08:36 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:23:53.556 04:08:36 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:23:53.556 04:08:36 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:53.556 04:08:36 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:53.556 04:08:36 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:23:53.556 04:08:36 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:23:53.556 04:08:36 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:53.556 04:08:36 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:23:53.556 04:08:36 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:23:53.556 04:08:36 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:23:53.556 04:08:36 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:23:53.556 04:08:36 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:53.556 04:08:36 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:23:53.556 04:08:36 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:23:53.556 04:08:36 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:53.556 04:08:36 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:53.556 04:08:36 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:23:53.556 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:53.556 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:53.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.556 --rc genhtml_branch_coverage=1 00:23:53.556 --rc genhtml_function_coverage=1 00:23:53.556 --rc genhtml_legend=1 00:23:53.556 --rc geninfo_all_blocks=1 00:23:53.556 --rc geninfo_unexecuted_blocks=1 00:23:53.556 00:23:53.556 ' 00:23:53.556 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:53.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.556 --rc genhtml_branch_coverage=1 00:23:53.556 --rc genhtml_function_coverage=1 00:23:53.556 --rc genhtml_legend=1 00:23:53.556 --rc geninfo_all_blocks=1 00:23:53.556 --rc geninfo_unexecuted_blocks=1 00:23:53.556 00:23:53.556 ' 00:23:53.556 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:53.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.556 --rc genhtml_branch_coverage=1 00:23:53.556 --rc genhtml_function_coverage=1 00:23:53.556 --rc genhtml_legend=1 00:23:53.556 --rc geninfo_all_blocks=1 00:23:53.556 --rc geninfo_unexecuted_blocks=1 00:23:53.556 00:23:53.556 ' 00:23:53.556 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:53.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.556 --rc genhtml_branch_coverage=1 00:23:53.556 --rc genhtml_function_coverage=1 00:23:53.556 --rc genhtml_legend=1 00:23:53.556 --rc geninfo_all_blocks=1 00:23:53.556 --rc geninfo_unexecuted_blocks=1 00:23:53.556 00:23:53.556 ' 00:23:53.556 04:08:36 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
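[editor's note] The xtrace above steps through the lcov version gate in scripts/common.sh: "lt 1.15 2" delegates to cmp_versions, which splits both version strings on ".", "-" and ":" and compares them component by component, so this run ends up selecting the lcov 1.x "--rc lcov_branch_coverage=1 ..." option set. A minimal bash sketch of that comparison, collapsed into a single lt helper and simplified from the trace (the real cmp_versions also validates each component through its decimal helper and supports other operators), could look like:

    #!/usr/bin/env bash
    # Sketch of the component-wise version compare traced above; a
    # simplified reconstruction, not the exact scripts/common.sh source.
    lt() {
        local IFS=.-:          # split versions on '.', '-' and ':', as in the trace
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v d1 d2
        # Walk the longer component list, zero-padding the shorter one
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            ((d1 < d2)) && return 0
            ((d1 > d2)) && return 1
        done
        return 1               # equal versions are not less-than
    }

    lt 1.15 2 && echo "lcov 1.15 predates 2.x, use the 1.x option set"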
00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.c2j5pN2IM5 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:53.817 
04:08:36 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=78997 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:53.817 04:08:36 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 78997 00:23:53.817 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 78997 ']' 00:23:53.817 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.817 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:53.817 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.817 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:53.817 04:08:36 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:53.817 [2024-12-07 04:08:36.438380] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:23:53.817 [2024-12-07 04:08:36.438702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78997 ] 00:23:54.077 [2024-12-07 04:08:36.618189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.077 [2024-12-07 04:08:36.723621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.016 04:08:37 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:55.016 04:08:37 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:23:55.016 04:08:37 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:55.016 04:08:37 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:23:55.016 04:08:37 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:55.016 04:08:37 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:23:55.016 04:08:37 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:23:55.016 04:08:37 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:55.276 04:08:37 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:55.276 04:08:37 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:23:55.276 04:08:37 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:55.276 04:08:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:23:55.276 04:08:37 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:55.276 04:08:37 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:55.276 04:08:37 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:55.276 04:08:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:55.536 04:08:38 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:55.536 { 00:23:55.536 "name": "nvme0n1", 00:23:55.536 "aliases": [ 00:23:55.536 "a1db7e98-2d2a-42a3-87e7-fbfad04d1f96" 00:23:55.536 ], 00:23:55.536 "product_name": "NVMe disk", 00:23:55.536 "block_size": 4096, 00:23:55.536 "num_blocks": 1310720, 00:23:55.536 "uuid": 
"a1db7e98-2d2a-42a3-87e7-fbfad04d1f96", 00:23:55.536 "numa_id": -1, 00:23:55.536 "assigned_rate_limits": { 00:23:55.536 "rw_ios_per_sec": 0, 00:23:55.536 "rw_mbytes_per_sec": 0, 00:23:55.536 "r_mbytes_per_sec": 0, 00:23:55.536 "w_mbytes_per_sec": 0 00:23:55.536 }, 00:23:55.536 "claimed": true, 00:23:55.536 "claim_type": "read_many_write_one", 00:23:55.536 "zoned": false, 00:23:55.536 "supported_io_types": { 00:23:55.536 "read": true, 00:23:55.536 "write": true, 00:23:55.536 "unmap": true, 00:23:55.536 "flush": true, 00:23:55.536 "reset": true, 00:23:55.536 "nvme_admin": true, 00:23:55.536 "nvme_io": true, 00:23:55.536 "nvme_io_md": false, 00:23:55.536 "write_zeroes": true, 00:23:55.536 "zcopy": false, 00:23:55.536 "get_zone_info": false, 00:23:55.536 "zone_management": false, 00:23:55.536 "zone_append": false, 00:23:55.536 "compare": true, 00:23:55.536 "compare_and_write": false, 00:23:55.536 "abort": true, 00:23:55.536 "seek_hole": false, 00:23:55.536 "seek_data": false, 00:23:55.536 "copy": true, 00:23:55.536 "nvme_iov_md": false 00:23:55.536 }, 00:23:55.536 "driver_specific": { 00:23:55.536 "nvme": [ 00:23:55.536 { 00:23:55.536 "pci_address": "0000:00:11.0", 00:23:55.536 "trid": { 00:23:55.536 "trtype": "PCIe", 00:23:55.536 "traddr": "0000:00:11.0" 00:23:55.536 }, 00:23:55.536 "ctrlr_data": { 00:23:55.536 "cntlid": 0, 00:23:55.536 "vendor_id": "0x1b36", 00:23:55.536 "model_number": "QEMU NVMe Ctrl", 00:23:55.536 "serial_number": "12341", 00:23:55.536 "firmware_revision": "8.0.0", 00:23:55.536 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:55.536 "oacs": { 00:23:55.536 "security": 0, 00:23:55.536 "format": 1, 00:23:55.536 "firmware": 0, 00:23:55.536 "ns_manage": 1 00:23:55.536 }, 00:23:55.536 "multi_ctrlr": false, 00:23:55.536 "ana_reporting": false 00:23:55.536 }, 00:23:55.536 "vs": { 00:23:55.536 "nvme_version": "1.4" 00:23:55.536 }, 00:23:55.536 "ns_data": { 00:23:55.536 "id": 1, 00:23:55.536 "can_share": false 00:23:55.536 } 00:23:55.536 } 00:23:55.536 ], 00:23:55.536 "mp_policy": "active_passive" 00:23:55.536 } 00:23:55.536 } 00:23:55.536 ]' 00:23:55.536 04:08:38 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:55.536 04:08:38 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:23:55.536 04:08:38 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:55.536 04:08:38 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:23:55.536 04:08:38 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:23:55.536 04:08:38 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:23:55.536 04:08:38 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:23:55.536 04:08:38 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:55.536 04:08:38 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:23:55.536 04:08:38 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:55.536 04:08:38 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:55.796 04:08:38 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=af39bbc6-c6f0-4389-b7ef-c7d8d025ef17 00:23:55.796 04:08:38 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:23:55.796 04:08:38 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u af39bbc6-c6f0-4389-b7ef-c7d8d025ef17 00:23:56.055 04:08:38 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:23:56.315 04:08:38 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=4c8dd4ab-225b-4037-af49-e1e25b8bf6bd 00:23:56.315 04:08:38 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 4c8dd4ab-225b-4037-af49-e1e25b8bf6bd 00:23:56.315 04:08:39 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=3225f89a-00b9-4fa6-9c62-22d2c005d33b 00:23:56.315 04:08:39 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:23:56.315 04:08:39 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3225f89a-00b9-4fa6-9c62-22d2c005d33b 00:23:56.315 04:08:39 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:23:56.315 04:08:39 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:56.315 04:08:39 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=3225f89a-00b9-4fa6-9c62-22d2c005d33b 00:23:56.315 04:08:39 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:23:56.315 04:08:39 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 3225f89a-00b9-4fa6-9c62-22d2c005d33b 00:23:56.315 04:08:39 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=3225f89a-00b9-4fa6-9c62-22d2c005d33b 00:23:56.315 04:08:39 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:56.315 04:08:39 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:56.315 04:08:39 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:56.315 04:08:39 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3225f89a-00b9-4fa6-9c62-22d2c005d33b 00:23:56.576 04:08:39 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:56.576 { 00:23:56.576 "name": "3225f89a-00b9-4fa6-9c62-22d2c005d33b", 00:23:56.576 "aliases": [ 00:23:56.576 "lvs/nvme0n1p0" 00:23:56.576 ], 00:23:56.576 "product_name": "Logical Volume", 00:23:56.576 "block_size": 4096, 00:23:56.576 "num_blocks": 26476544, 00:23:56.576 "uuid": "3225f89a-00b9-4fa6-9c62-22d2c005d33b", 00:23:56.576 "assigned_rate_limits": { 00:23:56.576 "rw_ios_per_sec": 0, 00:23:56.576 "rw_mbytes_per_sec": 0, 00:23:56.576 "r_mbytes_per_sec": 0, 00:23:56.576 "w_mbytes_per_sec": 0 00:23:56.576 }, 00:23:56.576 "claimed": false, 00:23:56.576 "zoned": false, 00:23:56.576 "supported_io_types": { 00:23:56.576 "read": true, 00:23:56.576 "write": true, 00:23:56.576 "unmap": true, 00:23:56.576 "flush": false, 00:23:56.576 "reset": true, 00:23:56.576 "nvme_admin": false, 00:23:56.576 "nvme_io": false, 00:23:56.576 "nvme_io_md": false, 00:23:56.576 "write_zeroes": true, 00:23:56.576 "zcopy": false, 00:23:56.576 "get_zone_info": false, 00:23:56.576 "zone_management": false, 00:23:56.576 "zone_append": false, 00:23:56.576 "compare": false, 00:23:56.576 "compare_and_write": false, 00:23:56.576 "abort": false, 00:23:56.576 "seek_hole": true, 00:23:56.576 "seek_data": true, 00:23:56.576 "copy": false, 00:23:56.576 "nvme_iov_md": false 00:23:56.576 }, 00:23:56.576 "driver_specific": { 00:23:56.576 "lvol": { 00:23:56.576 "lvol_store_uuid": "4c8dd4ab-225b-4037-af49-e1e25b8bf6bd", 00:23:56.576 "base_bdev": "nvme0n1", 00:23:56.576 "thin_provision": true, 00:23:56.576 "num_allocated_clusters": 0, 00:23:56.576 "snapshot": false, 00:23:56.576 "clone": false, 00:23:56.576 "esnap_clone": false 00:23:56.576 } 00:23:56.576 } 00:23:56.576 } 00:23:56.576 ]' 00:23:56.576 04:08:39 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:56.576 04:08:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:23:56.576 04:08:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:56.836 04:08:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:56.836 04:08:39 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:56.836 04:08:39 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:23:56.836 04:08:39 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:23:56.836 04:08:39 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:23:56.836 04:08:39 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:57.096 04:08:39 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:57.096 04:08:39 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:57.096 04:08:39 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 3225f89a-00b9-4fa6-9c62-22d2c005d33b 00:23:57.096 04:08:39 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=3225f89a-00b9-4fa6-9c62-22d2c005d33b 00:23:57.096 04:08:39 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:57.096 04:08:39 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:57.096 04:08:39 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:57.096 04:08:39 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3225f89a-00b9-4fa6-9c62-22d2c005d33b 00:23:57.096 04:08:39 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:57.096 { 00:23:57.096 "name": "3225f89a-00b9-4fa6-9c62-22d2c005d33b", 00:23:57.096 "aliases": [ 00:23:57.096 "lvs/nvme0n1p0" 00:23:57.096 ], 00:23:57.096 "product_name": "Logical Volume", 00:23:57.096 "block_size": 4096, 00:23:57.096 "num_blocks": 26476544, 00:23:57.096 "uuid": "3225f89a-00b9-4fa6-9c62-22d2c005d33b", 00:23:57.096 "assigned_rate_limits": { 00:23:57.096 "rw_ios_per_sec": 0, 00:23:57.096 "rw_mbytes_per_sec": 0, 00:23:57.096 "r_mbytes_per_sec": 0, 00:23:57.096 "w_mbytes_per_sec": 0 00:23:57.096 }, 00:23:57.096 "claimed": false, 00:23:57.096 "zoned": false, 00:23:57.096 "supported_io_types": { 00:23:57.096 "read": true, 00:23:57.096 "write": true, 00:23:57.096 "unmap": true, 00:23:57.096 "flush": false, 00:23:57.096 "reset": true, 00:23:57.096 "nvme_admin": false, 00:23:57.096 "nvme_io": false, 00:23:57.096 "nvme_io_md": false, 00:23:57.096 "write_zeroes": true, 00:23:57.096 "zcopy": false, 00:23:57.096 "get_zone_info": false, 00:23:57.096 "zone_management": false, 00:23:57.096 "zone_append": false, 00:23:57.096 "compare": false, 00:23:57.096 "compare_and_write": false, 00:23:57.096 "abort": false, 00:23:57.096 "seek_hole": true, 00:23:57.096 "seek_data": true, 00:23:57.096 "copy": false, 00:23:57.096 "nvme_iov_md": false 00:23:57.096 }, 00:23:57.096 "driver_specific": { 00:23:57.096 "lvol": { 00:23:57.096 "lvol_store_uuid": "4c8dd4ab-225b-4037-af49-e1e25b8bf6bd", 00:23:57.096 "base_bdev": "nvme0n1", 00:23:57.096 "thin_provision": true, 00:23:57.096 "num_allocated_clusters": 0, 00:23:57.096 "snapshot": false, 00:23:57.096 "clone": false, 00:23:57.096 "esnap_clone": false 00:23:57.096 } 00:23:57.096 } 00:23:57.096 } 00:23:57.096 ]' 00:23:57.096 04:08:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
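[editor's note] The repeated jq pipeline above is the get_bdev_size pattern from autotest_common.sh: it pulls block_size and num_blocks out of the bdev_get_bdevs RPC output and reports the bdev size in MiB, which feeds the base_size/cache_size checks in ftl/common.sh (4096 B blocks x 26476544 blocks = 103424 MiB for the lvol here, versus 5120 MiB computed earlier for nvme0n1). A minimal sketch of that pattern, with the rpc.py path, jq filters, and bdev name taken from this run and the helper name ours, could be:

    #!/usr/bin/env bash
    # Sketch of the get_bdev_size pattern traced above: query one bdev via
    # the bdev_get_bdevs RPC, then convert num_blocks * block_size to MiB.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    get_bdev_size_mb() {
        local bdev_info bs nb
        bdev_info=$("$rpc" bdev_get_bdevs -b "$1")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096 in this run
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 26476544 for the lvol
        echo $((nb * bs / 1024 / 1024))               # -> 103424 (MiB)
    }

    get_bdev_size_mb 3225f89a-00b9-4fa6-9c62-22d2c005d33b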
00:23:57.354 04:08:39 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:23:57.354 04:08:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:57.355 04:08:39 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:23:57.355 04:08:39 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:57.355 04:08:39 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:23:57.355 04:08:39 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:23:57.355 04:08:39 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:57.614 04:08:40 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:23:57.614 04:08:40 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 3225f89a-00b9-4fa6-9c62-22d2c005d33b 00:23:57.614 04:08:40 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=3225f89a-00b9-4fa6-9c62-22d2c005d33b 00:23:57.614 04:08:40 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:23:57.614 04:08:40 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:23:57.614 04:08:40 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:23:57.614 04:08:40 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3225f89a-00b9-4fa6-9c62-22d2c005d33b 00:23:57.614 04:08:40 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:23:57.614 { 00:23:57.614 "name": "3225f89a-00b9-4fa6-9c62-22d2c005d33b", 00:23:57.614 "aliases": [ 00:23:57.614 "lvs/nvme0n1p0" 00:23:57.614 ], 00:23:57.614 "product_name": "Logical Volume", 00:23:57.614 "block_size": 4096, 00:23:57.614 "num_blocks": 26476544, 00:23:57.614 "uuid": "3225f89a-00b9-4fa6-9c62-22d2c005d33b", 00:23:57.614 "assigned_rate_limits": { 00:23:57.614 "rw_ios_per_sec": 0, 00:23:57.614 "rw_mbytes_per_sec": 0, 00:23:57.614 "r_mbytes_per_sec": 0, 00:23:57.614 "w_mbytes_per_sec": 0 00:23:57.614 }, 00:23:57.614 "claimed": false, 00:23:57.614 "zoned": false, 00:23:57.614 "supported_io_types": { 00:23:57.614 "read": true, 00:23:57.614 "write": true, 00:23:57.614 "unmap": true, 00:23:57.614 "flush": false, 00:23:57.614 "reset": true, 00:23:57.614 "nvme_admin": false, 00:23:57.614 "nvme_io": false, 00:23:57.614 "nvme_io_md": false, 00:23:57.614 "write_zeroes": true, 00:23:57.614 "zcopy": false, 00:23:57.614 "get_zone_info": false, 00:23:57.614 "zone_management": false, 00:23:57.614 "zone_append": false, 00:23:57.614 "compare": false, 00:23:57.614 "compare_and_write": false, 00:23:57.614 "abort": false, 00:23:57.614 "seek_hole": true, 00:23:57.614 "seek_data": true, 00:23:57.614 "copy": false, 00:23:57.614 "nvme_iov_md": false 00:23:57.614 }, 00:23:57.614 "driver_specific": { 00:23:57.614 "lvol": { 00:23:57.614 "lvol_store_uuid": "4c8dd4ab-225b-4037-af49-e1e25b8bf6bd", 00:23:57.614 "base_bdev": "nvme0n1", 00:23:57.614 "thin_provision": true, 00:23:57.614 "num_allocated_clusters": 0, 00:23:57.614 "snapshot": false, 00:23:57.614 "clone": false, 00:23:57.614 "esnap_clone": false 00:23:57.614 } 00:23:57.614 } 00:23:57.614 } 00:23:57.614 ]' 00:23:57.614 04:08:40 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:23:57.875 04:08:40 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:23:57.875 04:08:40 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:23:57.875 04:08:40 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:23:57.875 04:08:40 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:23:57.875 04:08:40 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:23:57.875 04:08:40 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:23:57.875 04:08:40 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 3225f89a-00b9-4fa6-9c62-22d2c005d33b --l2p_dram_limit 10' 00:23:57.875 04:08:40 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:23:57.875 04:08:40 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:23:57.875 04:08:40 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:57.875 04:08:40 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:23:57.875 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:23:57.875 04:08:40 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3225f89a-00b9-4fa6-9c62-22d2c005d33b --l2p_dram_limit 10 -c nvc0n1p0 00:23:57.875 [2024-12-07 04:08:40.567636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.875 [2024-12-07 04:08:40.567690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:57.875 [2024-12-07 04:08:40.567726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:57.875 [2024-12-07 04:08:40.567737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.875 [2024-12-07 04:08:40.567804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.875 [2024-12-07 04:08:40.567816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:57.875 [2024-12-07 04:08:40.567829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:23:57.875 [2024-12-07 04:08:40.567839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.875 [2024-12-07 04:08:40.567868] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:57.875 [2024-12-07 04:08:40.568914] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:57.875 [2024-12-07 04:08:40.568962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.875 [2024-12-07 04:08:40.568974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:57.875 [2024-12-07 04:08:40.568989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.102 ms 00:23:57.875 [2024-12-07 04:08:40.568999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.875 [2024-12-07 04:08:40.569083] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b6778f2e-d435-43bc-a468-916c780de568 00:23:57.875 [2024-12-07 04:08:40.570546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.875 [2024-12-07 04:08:40.570713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:57.875 [2024-12-07 04:08:40.570736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:23:57.875 [2024-12-07 04:08:40.570750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.875 [2024-12-07 04:08:40.578678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.875 [2024-12-07 
04:08:40.578816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:57.875 [2024-12-07 04:08:40.578901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.886 ms 00:23:57.875 [2024-12-07 04:08:40.578962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.875 [2024-12-07 04:08:40.579091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.875 [2024-12-07 04:08:40.579227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:57.875 [2024-12-07 04:08:40.579308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:23:57.875 [2024-12-07 04:08:40.579347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.875 [2024-12-07 04:08:40.579432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.875 [2024-12-07 04:08:40.579474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:57.875 [2024-12-07 04:08:40.579509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:57.875 [2024-12-07 04:08:40.579614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.875 [2024-12-07 04:08:40.579684] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:57.875 [2024-12-07 04:08:40.585107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.875 [2024-12-07 04:08:40.585226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:57.875 [2024-12-07 04:08:40.585320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.438 ms 00:23:57.875 [2024-12-07 04:08:40.585356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.875 [2024-12-07 04:08:40.585421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.875 [2024-12-07 04:08:40.585454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:57.875 [2024-12-07 04:08:40.585488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:57.875 [2024-12-07 04:08:40.585519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.875 [2024-12-07 04:08:40.585578] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:57.875 [2024-12-07 04:08:40.585798] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:57.875 [2024-12-07 04:08:40.585866] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:57.875 [2024-12-07 04:08:40.585987] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:57.875 [2024-12-07 04:08:40.586045] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:57.875 [2024-12-07 04:08:40.586215] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:57.875 [2024-12-07 04:08:40.586324] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:57.875 [2024-12-07 04:08:40.586361] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:57.875 [2024-12-07 04:08:40.586396] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:57.875 [2024-12-07 04:08:40.586427] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:57.875 [2024-12-07 04:08:40.586507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.875 [2024-12-07 04:08:40.586552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:57.875 [2024-12-07 04:08:40.586588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.930 ms 00:23:57.875 [2024-12-07 04:08:40.586619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.875 [2024-12-07 04:08:40.586727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.875 [2024-12-07 04:08:40.586788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:57.875 [2024-12-07 04:08:40.586822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:23:57.875 [2024-12-07 04:08:40.586855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.875 [2024-12-07 04:08:40.586983] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:57.875 [2024-12-07 04:08:40.587097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:57.875 [2024-12-07 04:08:40.587133] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:57.875 [2024-12-07 04:08:40.587163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:57.875 [2024-12-07 04:08:40.587243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:57.875 [2024-12-07 04:08:40.587279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:57.875 [2024-12-07 04:08:40.587313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:57.875 [2024-12-07 04:08:40.587344] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:57.875 [2024-12-07 04:08:40.587441] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:57.875 [2024-12-07 04:08:40.587504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:57.875 [2024-12-07 04:08:40.587539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:57.875 [2024-12-07 04:08:40.587569] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:57.875 [2024-12-07 04:08:40.587602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:57.875 [2024-12-07 04:08:40.587631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:57.875 [2024-12-07 04:08:40.587664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:57.875 [2024-12-07 04:08:40.587749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:57.875 [2024-12-07 04:08:40.587792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:57.875 [2024-12-07 04:08:40.587822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:57.875 [2024-12-07 04:08:40.587854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:57.875 [2024-12-07 04:08:40.587885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:57.875 [2024-12-07 04:08:40.587917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:57.875 [2024-12-07 04:08:40.588014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:57.875 [2024-12-07 04:08:40.588054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:57.875 
[2024-12-07 04:08:40.588093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:57.875 [2024-12-07 04:08:40.588126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:57.876 [2024-12-07 04:08:40.588156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:57.876 [2024-12-07 04:08:40.588237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:57.876 [2024-12-07 04:08:40.588272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:57.876 [2024-12-07 04:08:40.588305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:57.876 [2024-12-07 04:08:40.588335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:57.876 [2024-12-07 04:08:40.588367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:57.876 [2024-12-07 04:08:40.588434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:57.876 [2024-12-07 04:08:40.588526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:57.876 [2024-12-07 04:08:40.588592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:57.876 [2024-12-07 04:08:40.588632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:57.876 [2024-12-07 04:08:40.588663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:57.876 [2024-12-07 04:08:40.588698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:57.876 [2024-12-07 04:08:40.588852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:57.876 [2024-12-07 04:08:40.588892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:57.876 [2024-12-07 04:08:40.588922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:57.876 [2024-12-07 04:08:40.588970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:57.876 [2024-12-07 04:08:40.589001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:57.876 [2024-12-07 04:08:40.589077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:57.876 [2024-12-07 04:08:40.589112] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:57.876 [2024-12-07 04:08:40.589147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:57.876 [2024-12-07 04:08:40.589177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:57.876 [2024-12-07 04:08:40.589210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:57.876 [2024-12-07 04:08:40.589242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:57.876 [2024-12-07 04:08:40.589261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:57.876 [2024-12-07 04:08:40.589271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:57.876 [2024-12-07 04:08:40.589284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:57.876 [2024-12-07 04:08:40.589293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:57.876 [2024-12-07 04:08:40.589306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:57.876 [2024-12-07 04:08:40.589318] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:57.876 [2024-12-07 
04:08:40.589337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:57.876 [2024-12-07 04:08:40.589349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:57.876 [2024-12-07 04:08:40.589363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:57.876 [2024-12-07 04:08:40.589374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:57.876 [2024-12-07 04:08:40.589387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:57.876 [2024-12-07 04:08:40.589398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:57.876 [2024-12-07 04:08:40.589411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:57.876 [2024-12-07 04:08:40.589422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:57.876 [2024-12-07 04:08:40.589437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:57.876 [2024-12-07 04:08:40.589447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:57.876 [2024-12-07 04:08:40.589463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:57.876 [2024-12-07 04:08:40.589474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:57.876 [2024-12-07 04:08:40.589487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:57.876 [2024-12-07 04:08:40.589498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:57.876 [2024-12-07 04:08:40.589511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:57.876 [2024-12-07 04:08:40.589522] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:57.876 [2024-12-07 04:08:40.589536] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:57.876 [2024-12-07 04:08:40.589548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:57.876 [2024-12-07 04:08:40.589561] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:57.876 [2024-12-07 04:08:40.589572] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:57.876 [2024-12-07 04:08:40.589585] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:57.876 [2024-12-07 04:08:40.589598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:57.876 [2024-12-07 04:08:40.589611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:57.876 [2024-12-07 04:08:40.589622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.689 ms 00:23:57.876 [2024-12-07 04:08:40.589635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:57.876 [2024-12-07 04:08:40.589718] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:57.876 [2024-12-07 04:08:40.589739] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:24:02.075 [2024-12-07 04:08:44.105328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.075 [2024-12-07 04:08:44.105639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:24:02.075 [2024-12-07 04:08:44.105736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3521.315 ms 00:24:02.075 [2024-12-07 04:08:44.105779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.075 [2024-12-07 04:08:44.142496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.075 [2024-12-07 04:08:44.142739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:02.075 [2024-12-07 04:08:44.142862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.413 ms 00:24:02.075 [2024-12-07 04:08:44.142908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.075 [2024-12-07 04:08:44.143080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.075 [2024-12-07 04:08:44.143193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:02.075 [2024-12-07 04:08:44.143265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:24:02.075 [2024-12-07 04:08:44.143312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.075 [2024-12-07 04:08:44.187273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.075 [2024-12-07 04:08:44.187459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:02.075 [2024-12-07 04:08:44.187600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.947 ms 00:24:02.075 [2024-12-07 04:08:44.187645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.075 [2024-12-07 04:08:44.187704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.075 [2024-12-07 04:08:44.187802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:02.075 [2024-12-07 04:08:44.187840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:02.075 [2024-12-07 04:08:44.187884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.075 [2024-12-07 04:08:44.188468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.075 [2024-12-07 04:08:44.188599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:02.075 [2024-12-07 04:08:44.188684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:24:02.075 [2024-12-07 04:08:44.188724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.075 
[2024-12-07 04:08:44.188898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.075 [2024-12-07 04:08:44.188962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:02.075 [2024-12-07 04:08:44.189183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:24:02.075 [2024-12-07 04:08:44.189206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.075 [2024-12-07 04:08:44.209028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.075 [2024-12-07 04:08:44.209080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:02.075 [2024-12-07 04:08:44.209094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.827 ms 00:24:02.075 [2024-12-07 04:08:44.209107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.075 [2024-12-07 04:08:44.238282] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:02.075 [2024-12-07 04:08:44.241780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.075 [2024-12-07 04:08:44.241815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:02.075 [2024-12-07 04:08:44.241836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.638 ms 00:24:02.075 [2024-12-07 04:08:44.241850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.075 [2024-12-07 04:08:44.336661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.075 [2024-12-07 04:08:44.336713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:24:02.075 [2024-12-07 04:08:44.336732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.924 ms 00:24:02.075 [2024-12-07 04:08:44.336743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.075 [2024-12-07 04:08:44.336925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.075 [2024-12-07 04:08:44.336948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:02.075 [2024-12-07 04:08:44.336964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:24:02.075 [2024-12-07 04:08:44.336974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.075 [2024-12-07 04:08:44.372056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.075 [2024-12-07 04:08:44.372093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:24:02.075 [2024-12-07 04:08:44.372126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.066 ms 00:24:02.075 [2024-12-07 04:08:44.372141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.075 [2024-12-07 04:08:44.408959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.075 [2024-12-07 04:08:44.408994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:24:02.075 [2024-12-07 04:08:44.409012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.831 ms 00:24:02.075 [2024-12-07 04:08:44.409023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.075 [2024-12-07 04:08:44.409736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.075 [2024-12-07 04:08:44.409763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:02.075 
[2024-12-07 04:08:44.409782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.673 ms 00:24:02.075 [2024-12-07 04:08:44.409793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.075 [2024-12-07 04:08:44.508902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.075 [2024-12-07 04:08:44.508946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:24:02.075 [2024-12-07 04:08:44.508967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.212 ms 00:24:02.075 [2024-12-07 04:08:44.508977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.075 [2024-12-07 04:08:44.544736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.075 [2024-12-07 04:08:44.544773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:24:02.075 [2024-12-07 04:08:44.544790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.736 ms 00:24:02.075 [2024-12-07 04:08:44.544800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.075 [2024-12-07 04:08:44.578631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.075 [2024-12-07 04:08:44.578667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:24:02.075 [2024-12-07 04:08:44.578682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.841 ms 00:24:02.075 [2024-12-07 04:08:44.578692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.075 [2024-12-07 04:08:44.612959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.075 [2024-12-07 04:08:44.612995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:02.075 [2024-12-07 04:08:44.613011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.278 ms 00:24:02.075 [2024-12-07 04:08:44.613021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.075 [2024-12-07 04:08:44.613067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.075 [2024-12-07 04:08:44.613079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:02.075 [2024-12-07 04:08:44.613094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:02.075 [2024-12-07 04:08:44.613105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.075 [2024-12-07 04:08:44.613198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.075 [2024-12-07 04:08:44.613213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:02.075 [2024-12-07 04:08:44.613226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:24:02.075 [2024-12-07 04:08:44.613235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.075 [2024-12-07 04:08:44.614344] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4052.825 ms, result 0 00:24:02.075 { 00:24:02.075 "name": "ftl0", 00:24:02.075 "uuid": "b6778f2e-d435-43bc-a468-916c780de568" 00:24:02.075 } 00:24:02.075 04:08:44 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:24:02.075 04:08:44 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:24:02.335 04:08:44 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:24:02.335 04:08:44 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:02.335 [2024-12-07 04:08:45.029332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.335 [2024-12-07 04:08:45.029391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:02.335 [2024-12-07 04:08:45.029406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:02.335 [2024-12-07 04:08:45.029419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.335 [2024-12-07 04:08:45.029444] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:02.335 [2024-12-07 04:08:45.033288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.335 [2024-12-07 04:08:45.033321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:02.335 [2024-12-07 04:08:45.033336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.828 ms 00:24:02.335 [2024-12-07 04:08:45.033346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.335 [2024-12-07 04:08:45.033599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.335 [2024-12-07 04:08:45.033614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:02.335 [2024-12-07 04:08:45.033645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:24:02.335 [2024-12-07 04:08:45.033657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.335 [2024-12-07 04:08:45.036190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.335 [2024-12-07 04:08:45.036217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:02.335 [2024-12-07 04:08:45.036231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.502 ms 00:24:02.335 [2024-12-07 04:08:45.036241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.335 [2024-12-07 04:08:45.041127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.335 [2024-12-07 04:08:45.041175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:02.335 [2024-12-07 04:08:45.041190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.869 ms 00:24:02.335 [2024-12-07 04:08:45.041200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.594 [2024-12-07 04:08:45.076340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.594 [2024-12-07 04:08:45.076380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:02.594 [2024-12-07 04:08:45.076397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.140 ms 00:24:02.594 [2024-12-07 04:08:45.076407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.594 [2024-12-07 04:08:45.097856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.594 [2024-12-07 04:08:45.097893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:02.594 [2024-12-07 04:08:45.097909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.434 ms 00:24:02.594 [2024-12-07 04:08:45.097920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.594 [2024-12-07 04:08:45.098072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.594 [2024-12-07 04:08:45.098087] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:02.594 [2024-12-07 04:08:45.098101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:24:02.594 [2024-12-07 04:08:45.098113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.594 [2024-12-07 04:08:45.133985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.594 [2024-12-07 04:08:45.134023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:02.594 [2024-12-07 04:08:45.134039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.908 ms 00:24:02.594 [2024-12-07 04:08:45.134050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.594 [2024-12-07 04:08:45.169823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.594 [2024-12-07 04:08:45.169861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:02.594 [2024-12-07 04:08:45.169877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.785 ms 00:24:02.594 [2024-12-07 04:08:45.169887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.594 [2024-12-07 04:08:45.205479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.594 [2024-12-07 04:08:45.205515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:02.594 [2024-12-07 04:08:45.205530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.603 ms 00:24:02.594 [2024-12-07 04:08:45.205540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.594 [2024-12-07 04:08:45.240164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.594 [2024-12-07 04:08:45.240200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:02.594 [2024-12-07 04:08:45.240231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.583 ms 00:24:02.594 [2024-12-07 04:08:45.240241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.594 [2024-12-07 04:08:45.240284] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:02.594 [2024-12-07 04:08:45.240325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:02.594 [2024-12-07 04:08:45.240341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:02.594 [2024-12-07 04:08:45.240352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:02.594 [2024-12-07 04:08:45.240366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:02.594 [2024-12-07 04:08:45.240379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:02.594 [2024-12-07 04:08:45.240392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:02.594 [2024-12-07 04:08:45.240403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:02.594 [2024-12-07 04:08:45.240419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:02.594 [2024-12-07 04:08:45.240430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:02.594 [2024-12-07 04:08:45.240443] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:02.594 [2024-12-07 04:08:45.240455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:02.594 [2024-12-07 04:08:45.240468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:02.594 [2024-12-07 04:08:45.240479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:02.594 [2024-12-07 04:08:45.240492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:02.594 [2024-12-07 04:08:45.240503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:02.594 [2024-12-07 04:08:45.240533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:02.594 [2024-12-07 04:08:45.240545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:02.594 [2024-12-07 04:08:45.240557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:02.594 [2024-12-07 04:08:45.240568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 
[2024-12-07 04:08:45.240772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.240997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:24:02.595 [2024-12-07 04:08:45.241109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:02.595 [2024-12-07 04:08:45.241626] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:02.595 [2024-12-07 04:08:45.241639] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b6778f2e-d435-43bc-a468-916c780de568 00:24:02.595 [2024-12-07 04:08:45.241651] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:02.595 [2024-12-07 04:08:45.241670] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:02.595 [2024-12-07 04:08:45.241680] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:02.595 [2024-12-07 04:08:45.241692] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:02.595 [2024-12-07 04:08:45.241702] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:02.595 [2024-12-07 04:08:45.241715] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:02.595 [2024-12-07 04:08:45.241726] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:02.595 [2024-12-07 04:08:45.241739] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:02.595 [2024-12-07 04:08:45.241749] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:24:02.595 [2024-12-07 04:08:45.241761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.595 [2024-12-07 04:08:45.241771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:02.595 [2024-12-07 04:08:45.241785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.482 ms 00:24:02.595 [2024-12-07 04:08:45.241797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.595 [2024-12-07 04:08:45.261361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.595 [2024-12-07 04:08:45.261396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:02.595 [2024-12-07 04:08:45.261428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.542 ms 00:24:02.595 [2024-12-07 04:08:45.261438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.595 [2024-12-07 04:08:45.262040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.595 [2024-12-07 04:08:45.262076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:02.595 [2024-12-07 04:08:45.262090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 00:24:02.595 [2024-12-07 04:08:45.262102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.595 [2024-12-07 04:08:45.326654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.595 [2024-12-07 04:08:45.326694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:02.595 [2024-12-07 04:08:45.326710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.595 [2024-12-07 04:08:45.326722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.595 [2024-12-07 04:08:45.326783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.595 [2024-12-07 04:08:45.326797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:02.595 [2024-12-07 04:08:45.326811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.595 [2024-12-07 04:08:45.326822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.595 [2024-12-07 04:08:45.326924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.595 [2024-12-07 04:08:45.326958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:02.595 [2024-12-07 04:08:45.326972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.595 [2024-12-07 04:08:45.326982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.595 [2024-12-07 04:08:45.327008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.595 [2024-12-07 04:08:45.327019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:02.595 [2024-12-07 04:08:45.327035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.595 [2024-12-07 04:08:45.327046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.853 [2024-12-07 04:08:45.450900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.853 [2024-12-07 04:08:45.450957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:02.853 [2024-12-07 04:08:45.450976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:24:02.853 [2024-12-07 04:08:45.450987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.853 [2024-12-07 04:08:45.549663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.853 [2024-12-07 04:08:45.549718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:02.853 [2024-12-07 04:08:45.549740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.853 [2024-12-07 04:08:45.549753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.853 [2024-12-07 04:08:45.549875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.853 [2024-12-07 04:08:45.549889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:02.853 [2024-12-07 04:08:45.549902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.853 [2024-12-07 04:08:45.549914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.853 [2024-12-07 04:08:45.549986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.853 [2024-12-07 04:08:45.550001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:02.853 [2024-12-07 04:08:45.550015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.853 [2024-12-07 04:08:45.550028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.853 [2024-12-07 04:08:45.550181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.853 [2024-12-07 04:08:45.550201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:02.853 [2024-12-07 04:08:45.550217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.853 [2024-12-07 04:08:45.550227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.853 [2024-12-07 04:08:45.550285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.853 [2024-12-07 04:08:45.550299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:02.853 [2024-12-07 04:08:45.550313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.853 [2024-12-07 04:08:45.550324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.853 [2024-12-07 04:08:45.550370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.853 [2024-12-07 04:08:45.550383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:02.853 [2024-12-07 04:08:45.550397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.853 [2024-12-07 04:08:45.550408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.853 [2024-12-07 04:08:45.550461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:02.853 [2024-12-07 04:08:45.550474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:02.853 [2024-12-07 04:08:45.550490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:02.853 [2024-12-07 04:08:45.550500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.853 [2024-12-07 04:08:45.550635] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 522.117 ms, result 0 00:24:02.853 true 00:24:02.853 04:08:45 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 78997 
00:24:02.853 04:08:45 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 78997 ']' 00:24:02.853 04:08:45 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 78997 00:24:02.853 04:08:45 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:24:03.111 04:08:45 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:03.111 04:08:45 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78997 00:24:03.111 04:08:45 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:03.111 04:08:45 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:03.111 killing process with pid 78997 00:24:03.111 04:08:45 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78997' 00:24:03.111 04:08:45 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 78997 00:24:03.111 04:08:45 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 78997 00:24:06.397 04:08:48 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:24:10.659 262144+0 records in 00:24:10.659 262144+0 records out 00:24:10.660 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.9635 s, 271 MB/s 00:24:10.660 04:08:52 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:24:11.593 04:08:54 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:11.852 [2024-12-07 04:08:54.372880] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:24:11.852 [2024-12-07 04:08:54.373203] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79233 ] 00:24:11.852 [2024-12-07 04:08:54.560658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.111 [2024-12-07 04:08:54.671207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.370 [2024-12-07 04:08:55.033877] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:12.370 [2024-12-07 04:08:55.033951] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:12.630 [2024-12-07 04:08:55.199104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.630 [2024-12-07 04:08:55.199164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:12.630 [2024-12-07 04:08:55.199179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:12.630 [2024-12-07 04:08:55.199189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.630 [2024-12-07 04:08:55.199233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.630 [2024-12-07 04:08:55.199248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:12.630 [2024-12-07 04:08:55.199258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:24:12.630 [2024-12-07 04:08:55.199268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.630 [2024-12-07 04:08:55.199288] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:24:12.630 [2024-12-07 04:08:55.200251] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:12.630 [2024-12-07 04:08:55.200279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.630 [2024-12-07 04:08:55.200291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:12.630 [2024-12-07 04:08:55.200303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.997 ms 00:24:12.630 [2024-12-07 04:08:55.200313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.630 [2024-12-07 04:08:55.201756] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:12.630 [2024-12-07 04:08:55.219944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.630 [2024-12-07 04:08:55.219978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:12.630 [2024-12-07 04:08:55.219992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.217 ms 00:24:12.630 [2024-12-07 04:08:55.220003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.630 [2024-12-07 04:08:55.220078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.630 [2024-12-07 04:08:55.220091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:12.630 [2024-12-07 04:08:55.220102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:12.630 [2024-12-07 04:08:55.220111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.630 [2024-12-07 04:08:55.226958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.630 [2024-12-07 04:08:55.226983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:12.630 [2024-12-07 04:08:55.226994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.788 ms 00:24:12.630 [2024-12-07 04:08:55.227012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.630 [2024-12-07 04:08:55.227083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.630 [2024-12-07 04:08:55.227096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:12.630 [2024-12-07 04:08:55.227107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:24:12.630 [2024-12-07 04:08:55.227116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.630 [2024-12-07 04:08:55.227153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.630 [2024-12-07 04:08:55.227165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:12.630 [2024-12-07 04:08:55.227175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:12.630 [2024-12-07 04:08:55.227185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.630 [2024-12-07 04:08:55.227211] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:12.630 [2024-12-07 04:08:55.231876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.630 [2024-12-07 04:08:55.231904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:12.630 [2024-12-07 04:08:55.231919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.677 ms 00:24:12.630 [2024-12-07 04:08:55.231936] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.630 [2024-12-07 04:08:55.231969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.630 [2024-12-07 04:08:55.231980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:12.630 [2024-12-07 04:08:55.231990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:24:12.630 [2024-12-07 04:08:55.232000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.630 [2024-12-07 04:08:55.232047] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:12.630 [2024-12-07 04:08:55.232073] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:12.630 [2024-12-07 04:08:55.232106] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:12.630 [2024-12-07 04:08:55.232125] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:12.630 [2024-12-07 04:08:55.232208] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:12.630 [2024-12-07 04:08:55.232221] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:12.630 [2024-12-07 04:08:55.232234] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:12.630 [2024-12-07 04:08:55.232247] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:12.630 [2024-12-07 04:08:55.232258] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:12.630 [2024-12-07 04:08:55.232268] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:12.630 [2024-12-07 04:08:55.232278] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:12.630 [2024-12-07 04:08:55.232290] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:12.630 [2024-12-07 04:08:55.232299] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:12.630 [2024-12-07 04:08:55.232309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.630 [2024-12-07 04:08:55.232319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:12.630 [2024-12-07 04:08:55.232328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:24:12.630 [2024-12-07 04:08:55.232337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.630 [2024-12-07 04:08:55.232404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.630 [2024-12-07 04:08:55.232415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:12.630 [2024-12-07 04:08:55.232425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:24:12.630 [2024-12-07 04:08:55.232434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.630 [2024-12-07 04:08:55.232520] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:12.630 [2024-12-07 04:08:55.232536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:12.630 [2024-12-07 04:08:55.232546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:24:12.630 [2024-12-07 04:08:55.232557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:12.630 [2024-12-07 04:08:55.232567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:12.630 [2024-12-07 04:08:55.232577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:12.630 [2024-12-07 04:08:55.232586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:12.630 [2024-12-07 04:08:55.232596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:12.630 [2024-12-07 04:08:55.232606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:12.630 [2024-12-07 04:08:55.232616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:12.630 [2024-12-07 04:08:55.232625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:12.630 [2024-12-07 04:08:55.232635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:12.630 [2024-12-07 04:08:55.232644] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:12.630 [2024-12-07 04:08:55.232661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:12.630 [2024-12-07 04:08:55.232670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:12.630 [2024-12-07 04:08:55.232679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:12.630 [2024-12-07 04:08:55.232687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:12.630 [2024-12-07 04:08:55.232696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:12.630 [2024-12-07 04:08:55.232705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:12.630 [2024-12-07 04:08:55.232713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:12.630 [2024-12-07 04:08:55.232722] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:12.630 [2024-12-07 04:08:55.232730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:12.630 [2024-12-07 04:08:55.232739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:12.630 [2024-12-07 04:08:55.232748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:12.630 [2024-12-07 04:08:55.232757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:12.630 [2024-12-07 04:08:55.232765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:12.630 [2024-12-07 04:08:55.232773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:12.630 [2024-12-07 04:08:55.232781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:12.630 [2024-12-07 04:08:55.232790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:12.630 [2024-12-07 04:08:55.232799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:12.630 [2024-12-07 04:08:55.232807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:12.630 [2024-12-07 04:08:55.232815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:12.630 [2024-12-07 04:08:55.232823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:12.630 [2024-12-07 04:08:55.232832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:12.630 [2024-12-07 04:08:55.232840] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:24:12.630 [2024-12-07 04:08:55.232849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:12.630 [2024-12-07 04:08:55.232858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:12.630 [2024-12-07 04:08:55.232866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:12.630 [2024-12-07 04:08:55.232874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:12.630 [2024-12-07 04:08:55.232882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:12.630 [2024-12-07 04:08:55.232890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:12.630 [2024-12-07 04:08:55.232900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:12.630 [2024-12-07 04:08:55.232909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:12.630 [2024-12-07 04:08:55.232917] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:12.630 [2024-12-07 04:08:55.232938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:12.630 [2024-12-07 04:08:55.232949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:12.630 [2024-12-07 04:08:55.232958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:12.630 [2024-12-07 04:08:55.232967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:12.630 [2024-12-07 04:08:55.232987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:12.630 [2024-12-07 04:08:55.232997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:12.630 [2024-12-07 04:08:55.233006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:12.630 [2024-12-07 04:08:55.233014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:12.630 [2024-12-07 04:08:55.233023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:12.630 [2024-12-07 04:08:55.233033] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:12.630 [2024-12-07 04:08:55.233044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:12.630 [2024-12-07 04:08:55.233058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:12.630 [2024-12-07 04:08:55.233068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:12.630 [2024-12-07 04:08:55.233078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:12.630 [2024-12-07 04:08:55.233087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:12.630 [2024-12-07 04:08:55.233097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:12.630 [2024-12-07 04:08:55.233106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:12.630 [2024-12-07 04:08:55.233116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:12.630 [2024-12-07 04:08:55.233127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:12.630 [2024-12-07 04:08:55.233136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:12.630 [2024-12-07 04:08:55.233145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:12.630 [2024-12-07 04:08:55.233155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:12.630 [2024-12-07 04:08:55.233164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:12.630 [2024-12-07 04:08:55.233174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:12.630 [2024-12-07 04:08:55.233183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:12.630 [2024-12-07 04:08:55.233192] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:12.630 [2024-12-07 04:08:55.233203] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:12.630 [2024-12-07 04:08:55.233215] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:12.630 [2024-12-07 04:08:55.233225] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:12.630 [2024-12-07 04:08:55.233235] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:12.631 [2024-12-07 04:08:55.233245] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:12.631 [2024-12-07 04:08:55.233255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.631 [2024-12-07 04:08:55.233264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:12.631 [2024-12-07 04:08:55.233274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.787 ms 00:24:12.631 [2024-12-07 04:08:55.233283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.631 [2024-12-07 04:08:55.273544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.631 [2024-12-07 04:08:55.273577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:12.631 [2024-12-07 04:08:55.273590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.280 ms 00:24:12.631 [2024-12-07 04:08:55.273603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.631 [2024-12-07 04:08:55.273673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.631 [2024-12-07 04:08:55.273684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:12.631 [2024-12-07 04:08:55.273694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.048 ms 00:24:12.631 [2024-12-07 04:08:55.273704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.631 [2024-12-07 04:08:55.332320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.631 [2024-12-07 04:08:55.332352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:12.631 [2024-12-07 04:08:55.332365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.658 ms 00:24:12.631 [2024-12-07 04:08:55.332375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.631 [2024-12-07 04:08:55.332406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.631 [2024-12-07 04:08:55.332417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:12.631 [2024-12-07 04:08:55.332434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:12.631 [2024-12-07 04:08:55.332444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.631 [2024-12-07 04:08:55.332961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.631 [2024-12-07 04:08:55.332982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:12.631 [2024-12-07 04:08:55.332993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:24:12.631 [2024-12-07 04:08:55.333002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.631 [2024-12-07 04:08:55.333123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.631 [2024-12-07 04:08:55.333139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:12.631 [2024-12-07 04:08:55.333168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:24:12.631 [2024-12-07 04:08:55.333179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.631 [2024-12-07 04:08:55.353405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.631 [2024-12-07 04:08:55.353438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:12.631 [2024-12-07 04:08:55.353451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.238 ms 00:24:12.631 [2024-12-07 04:08:55.353461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.889 [2024-12-07 04:08:55.372895] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:24:12.889 [2024-12-07 04:08:55.372943] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:12.889 [2024-12-07 04:08:55.372958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.889 [2024-12-07 04:08:55.372968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:12.889 [2024-12-07 04:08:55.372979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.436 ms 00:24:12.889 [2024-12-07 04:08:55.372988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.889 [2024-12-07 04:08:55.401201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.889 [2024-12-07 04:08:55.401248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:12.889 [2024-12-07 04:08:55.401261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.216 ms 00:24:12.889 [2024-12-07 04:08:55.401271] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.889 [2024-12-07 04:08:55.418889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.889 [2024-12-07 04:08:55.418923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:12.889 [2024-12-07 04:08:55.418942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.602 ms 00:24:12.889 [2024-12-07 04:08:55.418952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.889 [2024-12-07 04:08:55.436106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.889 [2024-12-07 04:08:55.436138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:12.889 [2024-12-07 04:08:55.436151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.143 ms 00:24:12.889 [2024-12-07 04:08:55.436161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.889 [2024-12-07 04:08:55.436892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.889 [2024-12-07 04:08:55.436917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:12.889 [2024-12-07 04:08:55.436945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.625 ms 00:24:12.889 [2024-12-07 04:08:55.436963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.889 [2024-12-07 04:08:55.520674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.889 [2024-12-07 04:08:55.520728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:12.889 [2024-12-07 04:08:55.520744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.807 ms 00:24:12.889 [2024-12-07 04:08:55.520760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.889 [2024-12-07 04:08:55.530819] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:12.889 [2024-12-07 04:08:55.533079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.889 [2024-12-07 04:08:55.533107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:12.889 [2024-12-07 04:08:55.533119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.293 ms 00:24:12.889 [2024-12-07 04:08:55.533130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.889 [2024-12-07 04:08:55.533199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.889 [2024-12-07 04:08:55.533212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:12.889 [2024-12-07 04:08:55.533223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:12.889 [2024-12-07 04:08:55.533233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.889 [2024-12-07 04:08:55.533309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.889 [2024-12-07 04:08:55.533321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:12.889 [2024-12-07 04:08:55.533332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:12.889 [2024-12-07 04:08:55.533342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.889 [2024-12-07 04:08:55.533361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.889 [2024-12-07 04:08:55.533371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:24:12.889 [2024-12-07 04:08:55.533381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:12.889 [2024-12-07 04:08:55.533390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.889 [2024-12-07 04:08:55.533424] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:12.889 [2024-12-07 04:08:55.533439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.889 [2024-12-07 04:08:55.533449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:12.889 [2024-12-07 04:08:55.533459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:12.889 [2024-12-07 04:08:55.533469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.889 [2024-12-07 04:08:55.567821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.889 [2024-12-07 04:08:55.567856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:12.889 [2024-12-07 04:08:55.567869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.388 ms 00:24:12.889 [2024-12-07 04:08:55.567885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.889 [2024-12-07 04:08:55.567961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:12.889 [2024-12-07 04:08:55.567974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:12.889 [2024-12-07 04:08:55.567984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:12.889 [2024-12-07 04:08:55.567994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:12.889 [2024-12-07 04:08:55.569143] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 370.154 ms, result 0 00:24:14.265  [2024-12-07T04:08:57.936Z] Copying: 23/1024 [MB] (23 MBps) [2024-12-07T04:08:58.873Z] Copying: 46/1024 [MB] (23 MBps) [2024-12-07T04:08:59.810Z] Copying: 70/1024 [MB] (23 MBps) [2024-12-07T04:09:00.748Z] Copying: 94/1024 [MB] (24 MBps) [2024-12-07T04:09:01.687Z] Copying: 118/1024 [MB] (23 MBps) [2024-12-07T04:09:02.625Z] Copying: 142/1024 [MB] (24 MBps) [2024-12-07T04:09:04.004Z] Copying: 166/1024 [MB] (23 MBps) [2024-12-07T04:09:04.572Z] Copying: 189/1024 [MB] (23 MBps) [2024-12-07T04:09:05.949Z] Copying: 212/1024 [MB] (22 MBps) [2024-12-07T04:09:06.886Z] Copying: 235/1024 [MB] (23 MBps) [2024-12-07T04:09:07.825Z] Copying: 260/1024 [MB] (24 MBps) [2024-12-07T04:09:08.763Z] Copying: 285/1024 [MB] (25 MBps) [2024-12-07T04:09:09.700Z] Copying: 309/1024 [MB] (23 MBps) [2024-12-07T04:09:10.634Z] Copying: 334/1024 [MB] (25 MBps) [2024-12-07T04:09:11.571Z] Copying: 359/1024 [MB] (24 MBps) [2024-12-07T04:09:12.949Z] Copying: 384/1024 [MB] (24 MBps) [2024-12-07T04:09:13.885Z] Copying: 409/1024 [MB] (24 MBps) [2024-12-07T04:09:14.822Z] Copying: 432/1024 [MB] (22 MBps) [2024-12-07T04:09:15.761Z] Copying: 456/1024 [MB] (24 MBps) [2024-12-07T04:09:16.700Z] Copying: 480/1024 [MB] (24 MBps) [2024-12-07T04:09:17.637Z] Copying: 505/1024 [MB] (24 MBps) [2024-12-07T04:09:18.575Z] Copying: 529/1024 [MB] (24 MBps) [2024-12-07T04:09:19.598Z] Copying: 554/1024 [MB] (24 MBps) [2024-12-07T04:09:20.548Z] Copying: 579/1024 [MB] (24 MBps) [2024-12-07T04:09:21.928Z] Copying: 604/1024 [MB] (24 MBps) [2024-12-07T04:09:22.865Z] Copying: 628/1024 [MB] (23 MBps) [2024-12-07T04:09:23.801Z] Copying: 651/1024 [MB] (23 
MBps) [2024-12-07T04:09:24.739Z] Copying: 675/1024 [MB] (23 MBps) [2024-12-07T04:09:25.673Z] Copying: 699/1024 [MB] (24 MBps) [2024-12-07T04:09:26.609Z] Copying: 723/1024 [MB] (24 MBps) [2024-12-07T04:09:27.550Z] Copying: 748/1024 [MB] (24 MBps) [2024-12-07T04:09:28.929Z] Copying: 773/1024 [MB] (24 MBps) [2024-12-07T04:09:29.867Z] Copying: 798/1024 [MB] (25 MBps) [2024-12-07T04:09:30.806Z] Copying: 824/1024 [MB] (26 MBps) [2024-12-07T04:09:31.744Z] Copying: 849/1024 [MB] (25 MBps) [2024-12-07T04:09:32.680Z] Copying: 875/1024 [MB] (25 MBps) [2024-12-07T04:09:33.614Z] Copying: 899/1024 [MB] (24 MBps) [2024-12-07T04:09:34.549Z] Copying: 922/1024 [MB] (23 MBps) [2024-12-07T04:09:35.920Z] Copying: 947/1024 [MB] (24 MBps) [2024-12-07T04:09:36.853Z] Copying: 971/1024 [MB] (24 MBps) [2024-12-07T04:09:37.786Z] Copying: 997/1024 [MB] (25 MBps) [2024-12-07T04:09:37.786Z] Copying: 1022/1024 [MB] (25 MBps) [2024-12-07T04:09:37.786Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-12-07 04:09:37.577615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.050 [2024-12-07 04:09:37.577678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:55.050 [2024-12-07 04:09:37.577701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:55.050 [2024-12-07 04:09:37.577715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.050 [2024-12-07 04:09:37.577738] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:55.050 [2024-12-07 04:09:37.581872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.050 [2024-12-07 04:09:37.581910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:55.050 [2024-12-07 04:09:37.581937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.123 ms 00:24:55.050 [2024-12-07 04:09:37.581946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.050 [2024-12-07 04:09:37.583835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.050 [2024-12-07 04:09:37.583876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:55.050 [2024-12-07 04:09:37.583889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.865 ms 00:24:55.050 [2024-12-07 04:09:37.583899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.050 [2024-12-07 04:09:37.601504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.050 [2024-12-07 04:09:37.601542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:55.050 [2024-12-07 04:09:37.601571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.616 ms 00:24:55.050 [2024-12-07 04:09:37.601581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.050 [2024-12-07 04:09:37.606377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.050 [2024-12-07 04:09:37.606409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:55.050 [2024-12-07 04:09:37.606421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.764 ms 00:24:55.050 [2024-12-07 04:09:37.606447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.050 [2024-12-07 04:09:37.641523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.050 [2024-12-07 04:09:37.641559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Persist NV cache metadata 00:24:55.050 [2024-12-07 04:09:37.641571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.074 ms 00:24:55.050 [2024-12-07 04:09:37.641581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.050 [2024-12-07 04:09:37.661434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.050 [2024-12-07 04:09:37.661469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:55.050 [2024-12-07 04:09:37.661482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.848 ms 00:24:55.050 [2024-12-07 04:09:37.661492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.050 [2024-12-07 04:09:37.661611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.050 [2024-12-07 04:09:37.661627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:55.050 [2024-12-07 04:09:37.661637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:24:55.050 [2024-12-07 04:09:37.661646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.050 [2024-12-07 04:09:37.696537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.050 [2024-12-07 04:09:37.696572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:55.050 [2024-12-07 04:09:37.696583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.932 ms 00:24:55.050 [2024-12-07 04:09:37.696592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.050 [2024-12-07 04:09:37.730921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.050 [2024-12-07 04:09:37.730967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:55.050 [2024-12-07 04:09:37.730995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.349 ms 00:24:55.050 [2024-12-07 04:09:37.731004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.050 [2024-12-07 04:09:37.764226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.050 [2024-12-07 04:09:37.764261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:55.050 [2024-12-07 04:09:37.764273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.238 ms 00:24:55.050 [2024-12-07 04:09:37.764282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.311 [2024-12-07 04:09:37.798679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.311 [2024-12-07 04:09:37.798717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:55.311 [2024-12-07 04:09:37.798745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.383 ms 00:24:55.311 [2024-12-07 04:09:37.798755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.311 [2024-12-07 04:09:37.798796] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:55.311 [2024-12-07 04:09:37.798812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:55.311 [2024-12-07 04:09:37.798831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:55.311 [2024-12-07 04:09:37.798841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:55.311 [2024-12-07 
04:09:37.798852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:55.311 [2024-12-07 04:09:37.798863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:55.311 [2024-12-07 04:09:37.798873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:55.311 [2024-12-07 04:09:37.798883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:55.311 [2024-12-07 04:09:37.798893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:55.311 [2024-12-07 04:09:37.798903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:55.311 [2024-12-07 04:09:37.798913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:55.311 [2024-12-07 04:09:37.798923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:55.311 [2024-12-07 04:09:37.798944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:55.311 [2024-12-07 04:09:37.798954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:55.311 [2024-12-07 04:09:37.798964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:55.311 [2024-12-07 04:09:37.798974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:55.311 [2024-12-07 04:09:37.798984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:55.311 [2024-12-07 04:09:37.798994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:55.311 [2024-12-07 04:09:37.799004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:55.311 [2024-12-07 04:09:37.799014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:55.311 [2024-12-07 04:09:37.799024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:55.311 [2024-12-07 04:09:37.799034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:55.311 [2024-12-07 04:09:37.799045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:55.311 [2024-12-07 04:09:37.799054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:55.311 [2024-12-07 04:09:37.799065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:55.311 [2024-12-07 04:09:37.799074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:55.311 [2024-12-07 04:09:37.799086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:55.311 [2024-12-07 04:09:37.799096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:55.311 [2024-12-07 04:09:37.799105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 
00:24:55.311 [2024-12-07 04:09:37.799115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 
wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:55.312 [2024-12-07 04:09:37.799889] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:55.312 [2024-12-07 04:09:37.799902] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b6778f2e-d435-43bc-a468-916c780de568 00:24:55.312 [2024-12-07 04:09:37.799913] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid 
LBAs: 0 00:24:55.312 [2024-12-07 04:09:37.799922] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:55.312 [2024-12-07 04:09:37.799931] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:55.312 [2024-12-07 04:09:37.799941] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:55.312 [2024-12-07 04:09:37.799958] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:55.312 [2024-12-07 04:09:37.799978] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:55.312 [2024-12-07 04:09:37.799987] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:55.312 [2024-12-07 04:09:37.799996] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:55.312 [2024-12-07 04:09:37.800005] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:55.312 [2024-12-07 04:09:37.800015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.312 [2024-12-07 04:09:37.800025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:55.312 [2024-12-07 04:09:37.800035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.227 ms 00:24:55.312 [2024-12-07 04:09:37.800045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.312 [2024-12-07 04:09:37.819555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.312 [2024-12-07 04:09:37.819586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:55.312 [2024-12-07 04:09:37.819598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.503 ms 00:24:55.312 [2024-12-07 04:09:37.819607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.312 [2024-12-07 04:09:37.820200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:55.312 [2024-12-07 04:09:37.820219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:55.312 [2024-12-07 04:09:37.820230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.572 ms 00:24:55.312 [2024-12-07 04:09:37.820246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.313 [2024-12-07 04:09:37.868834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.313 [2024-12-07 04:09:37.868870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:55.313 [2024-12-07 04:09:37.868899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.313 [2024-12-07 04:09:37.868909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.313 [2024-12-07 04:09:37.868972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.313 [2024-12-07 04:09:37.868984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:55.313 [2024-12-07 04:09:37.868995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.313 [2024-12-07 04:09:37.869008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.313 [2024-12-07 04:09:37.869066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.313 [2024-12-07 04:09:37.869079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:55.313 [2024-12-07 04:09:37.869090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.313 [2024-12-07 04:09:37.869100] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.313 [2024-12-07 04:09:37.869115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.313 [2024-12-07 04:09:37.869126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:55.313 [2024-12-07 04:09:37.869135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.313 [2024-12-07 04:09:37.869144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.313 [2024-12-07 04:09:37.985574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.313 [2024-12-07 04:09:37.985629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:55.313 [2024-12-07 04:09:37.985643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.313 [2024-12-07 04:09:37.985653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.572 [2024-12-07 04:09:38.081890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.572 [2024-12-07 04:09:38.081968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:55.572 [2024-12-07 04:09:38.081983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.572 [2024-12-07 04:09:38.081999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.572 [2024-12-07 04:09:38.082086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.572 [2024-12-07 04:09:38.082098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:55.572 [2024-12-07 04:09:38.082109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.572 [2024-12-07 04:09:38.082118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.572 [2024-12-07 04:09:38.082155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.572 [2024-12-07 04:09:38.082165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:55.572 [2024-12-07 04:09:38.082176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.572 [2024-12-07 04:09:38.082185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.572 [2024-12-07 04:09:38.082316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.572 [2024-12-07 04:09:38.082329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:55.572 [2024-12-07 04:09:38.082339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.572 [2024-12-07 04:09:38.082349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.572 [2024-12-07 04:09:38.082402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.572 [2024-12-07 04:09:38.082413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:55.572 [2024-12-07 04:09:38.082424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.572 [2024-12-07 04:09:38.082434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.572 [2024-12-07 04:09:38.082471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.573 [2024-12-07 04:09:38.082487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:55.573 [2024-12-07 04:09:38.082497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:24:55.573 [2024-12-07 04:09:38.082507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.573 [2024-12-07 04:09:38.082550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:55.573 [2024-12-07 04:09:38.082561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:55.573 [2024-12-07 04:09:38.082572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:55.573 [2024-12-07 04:09:38.082581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:55.573 [2024-12-07 04:09:38.082743] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 505.895 ms, result 0 00:24:56.510 00:24:56.510 00:24:56.770 04:09:39 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:24:56.770 [2024-12-07 04:09:39.356987] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:24:56.770 [2024-12-07 04:09:39.357116] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79690 ] 00:24:57.030 [2024-12-07 04:09:39.536290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.030 [2024-12-07 04:09:39.642234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.289 [2024-12-07 04:09:40.008828] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:57.289 [2024-12-07 04:09:40.008901] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:57.551 [2024-12-07 04:09:40.169185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.551 [2024-12-07 04:09:40.169239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:57.551 [2024-12-07 04:09:40.169254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:57.551 [2024-12-07 04:09:40.169264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.551 [2024-12-07 04:09:40.169325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.551 [2024-12-07 04:09:40.169340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:57.551 [2024-12-07 04:09:40.169351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:24:57.551 [2024-12-07 04:09:40.169362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.551 [2024-12-07 04:09:40.169382] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:57.551 [2024-12-07 04:09:40.170403] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:57.551 [2024-12-07 04:09:40.170443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.551 [2024-12-07 04:09:40.170454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:57.551 [2024-12-07 04:09:40.170466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms 00:24:57.551 [2024-12-07 04:09:40.170476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:57.551 [2024-12-07 04:09:40.171939] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:57.551 [2024-12-07 04:09:40.190680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.551 [2024-12-07 04:09:40.190716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:57.551 [2024-12-07 04:09:40.190730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.783 ms 00:24:57.551 [2024-12-07 04:09:40.190740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.551 [2024-12-07 04:09:40.190822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.551 [2024-12-07 04:09:40.190835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:57.551 [2024-12-07 04:09:40.190846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:24:57.551 [2024-12-07 04:09:40.190856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.551 [2024-12-07 04:09:40.197744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.551 [2024-12-07 04:09:40.197773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:57.551 [2024-12-07 04:09:40.197784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.830 ms 00:24:57.551 [2024-12-07 04:09:40.197797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.551 [2024-12-07 04:09:40.197888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.551 [2024-12-07 04:09:40.197900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:57.551 [2024-12-07 04:09:40.197911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:24:57.551 [2024-12-07 04:09:40.197921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.551 [2024-12-07 04:09:40.197970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.551 [2024-12-07 04:09:40.197983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:57.551 [2024-12-07 04:09:40.197994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:57.551 [2024-12-07 04:09:40.198003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.551 [2024-12-07 04:09:40.198030] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:57.551 [2024-12-07 04:09:40.202810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.551 [2024-12-07 04:09:40.202842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:57.551 [2024-12-07 04:09:40.202858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.792 ms 00:24:57.551 [2024-12-07 04:09:40.202869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.551 [2024-12-07 04:09:40.202902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.551 [2024-12-07 04:09:40.202914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:57.551 [2024-12-07 04:09:40.202924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:57.551 [2024-12-07 04:09:40.202943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.551 [2024-12-07 04:09:40.202997] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
FTL layout setup mode 0 00:24:57.551 [2024-12-07 04:09:40.203023] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:57.551 [2024-12-07 04:09:40.203057] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:57.551 [2024-12-07 04:09:40.203078] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:57.551 [2024-12-07 04:09:40.203166] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:57.551 [2024-12-07 04:09:40.203180] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:57.551 [2024-12-07 04:09:40.203194] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:57.551 [2024-12-07 04:09:40.203206] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:57.551 [2024-12-07 04:09:40.203219] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:57.551 [2024-12-07 04:09:40.203230] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:57.551 [2024-12-07 04:09:40.203241] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:57.551 [2024-12-07 04:09:40.203254] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:57.551 [2024-12-07 04:09:40.203265] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:57.551 [2024-12-07 04:09:40.203275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.551 [2024-12-07 04:09:40.203285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:57.551 [2024-12-07 04:09:40.203296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:24:57.551 [2024-12-07 04:09:40.203306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.551 [2024-12-07 04:09:40.203380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.551 [2024-12-07 04:09:40.203391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:57.551 [2024-12-07 04:09:40.203401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:24:57.551 [2024-12-07 04:09:40.203411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.551 [2024-12-07 04:09:40.203506] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:57.551 [2024-12-07 04:09:40.203529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:57.551 [2024-12-07 04:09:40.203540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:57.551 [2024-12-07 04:09:40.203550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:57.551 [2024-12-07 04:09:40.203561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:57.551 [2024-12-07 04:09:40.203570] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:57.551 [2024-12-07 04:09:40.203580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:57.551 [2024-12-07 04:09:40.203589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:57.551 [2024-12-07 04:09:40.203599] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:57.551 [2024-12-07 04:09:40.203608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:57.551 [2024-12-07 04:09:40.203618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:57.551 [2024-12-07 04:09:40.203630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:57.551 [2024-12-07 04:09:40.203639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:57.551 [2024-12-07 04:09:40.203659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:57.551 [2024-12-07 04:09:40.203669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:57.551 [2024-12-07 04:09:40.203678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:57.551 [2024-12-07 04:09:40.203687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:57.551 [2024-12-07 04:09:40.203697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:57.551 [2024-12-07 04:09:40.203706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:57.551 [2024-12-07 04:09:40.203715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:57.551 [2024-12-07 04:09:40.203724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:57.551 [2024-12-07 04:09:40.203734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:57.551 [2024-12-07 04:09:40.203743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:57.551 [2024-12-07 04:09:40.203752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:57.551 [2024-12-07 04:09:40.203762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:57.551 [2024-12-07 04:09:40.203771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:57.551 [2024-12-07 04:09:40.203780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:57.551 [2024-12-07 04:09:40.203789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:57.551 [2024-12-07 04:09:40.203798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:57.551 [2024-12-07 04:09:40.203807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:57.551 [2024-12-07 04:09:40.203816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:57.551 [2024-12-07 04:09:40.203824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:57.551 [2024-12-07 04:09:40.203833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:57.551 [2024-12-07 04:09:40.203842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:57.551 [2024-12-07 04:09:40.203851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:57.551 [2024-12-07 04:09:40.203860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:57.552 [2024-12-07 04:09:40.203869] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:57.552 [2024-12-07 04:09:40.203878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:57.552 [2024-12-07 04:09:40.203887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:57.552 [2024-12-07 04:09:40.203896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
00:24:57.552 [2024-12-07 04:09:40.203904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:57.552 [2024-12-07 04:09:40.203914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:57.552 [2024-12-07 04:09:40.203922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:57.552 [2024-12-07 04:09:40.203956] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:57.552 [2024-12-07 04:09:40.203967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:57.552 [2024-12-07 04:09:40.203977] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:57.552 [2024-12-07 04:09:40.203987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:57.552 [2024-12-07 04:09:40.203997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:57.552 [2024-12-07 04:09:40.204006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:57.552 [2024-12-07 04:09:40.204015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:57.552 [2024-12-07 04:09:40.204024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:57.552 [2024-12-07 04:09:40.204033] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:57.552 [2024-12-07 04:09:40.204042] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:57.552 [2024-12-07 04:09:40.204053] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:57.552 [2024-12-07 04:09:40.204065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:57.552 [2024-12-07 04:09:40.204081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:57.552 [2024-12-07 04:09:40.204091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:57.552 [2024-12-07 04:09:40.204101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:57.552 [2024-12-07 04:09:40.204112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:57.552 [2024-12-07 04:09:40.204122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:57.552 [2024-12-07 04:09:40.204132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:57.552 [2024-12-07 04:09:40.204143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:57.552 [2024-12-07 04:09:40.204153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:57.552 [2024-12-07 04:09:40.204163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:57.552 [2024-12-07 04:09:40.204173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:57.552 [2024-12-07 
04:09:40.204183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:57.552 [2024-12-07 04:09:40.204193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:57.552 [2024-12-07 04:09:40.204203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:57.552 [2024-12-07 04:09:40.204213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:57.552 [2024-12-07 04:09:40.204223] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:57.552 [2024-12-07 04:09:40.204234] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:57.552 [2024-12-07 04:09:40.204245] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:57.552 [2024-12-07 04:09:40.204256] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:57.552 [2024-12-07 04:09:40.204266] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:57.552 [2024-12-07 04:09:40.204276] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:57.552 [2024-12-07 04:09:40.204287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.552 [2024-12-07 04:09:40.204298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:57.552 [2024-12-07 04:09:40.204308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.837 ms 00:24:57.552 [2024-12-07 04:09:40.204317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.552 [2024-12-07 04:09:40.244938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.552 [2024-12-07 04:09:40.244975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:57.552 [2024-12-07 04:09:40.245005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.638 ms 00:24:57.552 [2024-12-07 04:09:40.245019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.552 [2024-12-07 04:09:40.245093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.552 [2024-12-07 04:09:40.245105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:57.552 [2024-12-07 04:09:40.245116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:24:57.552 [2024-12-07 04:09:40.245126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.813 [2024-12-07 04:09:40.315367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.813 [2024-12-07 04:09:40.315407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:57.813 [2024-12-07 04:09:40.315421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.295 ms 00:24:57.813 [2024-12-07 04:09:40.315432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.813 [2024-12-07 
04:09:40.315468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.813 [2024-12-07 04:09:40.315480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:57.813 [2024-12-07 04:09:40.315495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:57.813 [2024-12-07 04:09:40.315505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.813 [2024-12-07 04:09:40.316008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.813 [2024-12-07 04:09:40.316031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:57.813 [2024-12-07 04:09:40.316042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:24:57.813 [2024-12-07 04:09:40.316052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.813 [2024-12-07 04:09:40.316168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.813 [2024-12-07 04:09:40.316182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:57.813 [2024-12-07 04:09:40.316199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:24:57.813 [2024-12-07 04:09:40.316209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.813 [2024-12-07 04:09:40.334997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.813 [2024-12-07 04:09:40.335034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:57.813 [2024-12-07 04:09:40.335048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.796 ms 00:24:57.813 [2024-12-07 04:09:40.335058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.813 [2024-12-07 04:09:40.353135] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:57.813 [2024-12-07 04:09:40.353173] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:57.813 [2024-12-07 04:09:40.353204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.813 [2024-12-07 04:09:40.353214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:57.813 [2024-12-07 04:09:40.353225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.058 ms 00:24:57.813 [2024-12-07 04:09:40.353235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.813 [2024-12-07 04:09:40.381267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.813 [2024-12-07 04:09:40.381316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:57.813 [2024-12-07 04:09:40.381329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.035 ms 00:24:57.813 [2024-12-07 04:09:40.381339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.813 [2024-12-07 04:09:40.398785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.813 [2024-12-07 04:09:40.398822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:57.813 [2024-12-07 04:09:40.398850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.401 ms 00:24:57.813 [2024-12-07 04:09:40.398860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.813 [2024-12-07 04:09:40.416031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:57.813 [2024-12-07 04:09:40.416065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:57.813 [2024-12-07 04:09:40.416077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.161 ms 00:24:57.813 [2024-12-07 04:09:40.416086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.813 [2024-12-07 04:09:40.416840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.813 [2024-12-07 04:09:40.416874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:57.813 [2024-12-07 04:09:40.416890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.632 ms 00:24:57.813 [2024-12-07 04:09:40.416899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.813 [2024-12-07 04:09:40.497799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.813 [2024-12-07 04:09:40.497877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:57.813 [2024-12-07 04:09:40.497899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.008 ms 00:24:57.813 [2024-12-07 04:09:40.497910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.813 [2024-12-07 04:09:40.508219] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:57.813 [2024-12-07 04:09:40.510807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.813 [2024-12-07 04:09:40.510837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:57.813 [2024-12-07 04:09:40.510849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.859 ms 00:24:57.813 [2024-12-07 04:09:40.510860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.813 [2024-12-07 04:09:40.510960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.813 [2024-12-07 04:09:40.510975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:57.813 [2024-12-07 04:09:40.510990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:57.813 [2024-12-07 04:09:40.511001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.813 [2024-12-07 04:09:40.511077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.813 [2024-12-07 04:09:40.511090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:57.813 [2024-12-07 04:09:40.511101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:24:57.813 [2024-12-07 04:09:40.511111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.813 [2024-12-07 04:09:40.511135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.813 [2024-12-07 04:09:40.511146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:57.813 [2024-12-07 04:09:40.511156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:57.813 [2024-12-07 04:09:40.511167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.813 [2024-12-07 04:09:40.511201] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:57.813 [2024-12-07 04:09:40.511213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.813 [2024-12-07 04:09:40.511223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Self test on startup 00:24:57.813 [2024-12-07 04:09:40.511233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:57.813 [2024-12-07 04:09:40.511243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.813 [2024-12-07 04:09:40.546539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.813 [2024-12-07 04:09:40.546578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:57.813 [2024-12-07 04:09:40.546599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.331 ms 00:24:57.813 [2024-12-07 04:09:40.546609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.813 [2024-12-07 04:09:40.546680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:57.813 [2024-12-07 04:09:40.546692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:57.814 [2024-12-07 04:09:40.546703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:57.814 [2024-12-07 04:09:40.546714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:57.814 [2024-12-07 04:09:40.547783] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 378.789 ms, result 0 00:24:59.190 [2024-12-07T04:10:20.061Z] Copying: 1024/1024 [MB] (average 26 MBps) [2024-12-07 04:10:19.933115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.325 [2024-12-07 04:10:19.933180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:37.325 [2024-12-07 04:10:19.933202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:37.325 [2024-12-07 04:10:19.933216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.325 [2024-12-07 04:10:19.933247] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:37.325 [2024-12-07 04:10:19.939571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.325 [2024-12-07 04:10:19.939759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:37.325 [2024-12-07 04:10:19.939869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.308 ms 00:25:37.325 [2024-12-07 04:10:19.939918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.325 [2024-12-07 04:10:19.940349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.325 [2024-12-07 04:10:19.940422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:37.325 [2024-12-07 04:10:19.940654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:25:37.325 [2024-12-07 04:10:19.940704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.325 [2024-12-07 04:10:19.944029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.325 [2024-12-07 04:10:19.944171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:37.325 [2024-12-07 04:10:19.944255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.275 ms 00:25:37.325 [2024-12-07 04:10:19.944277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.325 [2024-12-07 04:10:19.949214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.325 [2024-12-07 04:10:19.949244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:37.325 [2024-12-07 04:10:19.949255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.917 ms 00:25:37.325 [2024-12-07 04:10:19.949265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.325 [2024-12-07 04:10:19.985194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.325 [2024-12-07 04:10:19.985228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:37.325 [2024-12-07 04:10:19.985257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.922 ms 00:25:37.325 [2024-12-07 04:10:19.985267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.325 [2024-12-07 04:10:20.006101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.325 [2024-12-07 04:10:20.006138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:37.325 [2024-12-07 04:10:20.006152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.828 ms 00:25:37.325 [2024-12-07 04:10:20.006163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.325 [2024-12-07 04:10:20.006309] mngt/ftl_mngt.c:
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.325 [2024-12-07 04:10:20.006323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:37.325 [2024-12-07 04:10:20.006334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:25:37.325 [2024-12-07 04:10:20.006345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.325 [2024-12-07 04:10:20.042117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.325 [2024-12-07 04:10:20.042149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:37.325 [2024-12-07 04:10:20.042161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.814 ms 00:25:37.325 [2024-12-07 04:10:20.042170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.584 [2024-12-07 04:10:20.076947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.584 [2024-12-07 04:10:20.076978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:37.584 [2024-12-07 04:10:20.076991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.796 ms 00:25:37.584 [2024-12-07 04:10:20.077000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.584 [2024-12-07 04:10:20.110492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.584 [2024-12-07 04:10:20.110523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:37.584 [2024-12-07 04:10:20.110535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.508 ms 00:25:37.584 [2024-12-07 04:10:20.110544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.584 [2024-12-07 04:10:20.144500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.584 [2024-12-07 04:10:20.144531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:37.584 [2024-12-07 04:10:20.144543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.938 ms 00:25:37.584 [2024-12-07 04:10:20.144552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.584 [2024-12-07 04:10:20.144587] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:37.584 [2024-12-07 04:10:20.144608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:37.584 [2024-12-07 04:10:20.144623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:37.584 [2024-12-07 04:10:20.144633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:37.584 [2024-12-07 04:10:20.144644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:37.584 [2024-12-07 04:10:20.144654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:37.584 [2024-12-07 04:10:20.144664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:37.584 [2024-12-07 04:10:20.144674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:37.584 [2024-12-07 04:10:20.144685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:37.584 [2024-12-07 04:10:20.144694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.144704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.144714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.144724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.144733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.144743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.144753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.144762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.144772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.144782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.144791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.144801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.144810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.144820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.144830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.144839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.144849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.144859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.144869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.144879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.144888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.144898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.144908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.144918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.144941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.144952] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.144962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.144972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.144998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 
04:10:20.145239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:37.585 [2024-12-07 04:10:20.145502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 
00:25:37.585 [2024-12-07 04:10:20.145512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:25:37.585 [2024-12-07 04:10:20.145524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:25:37.585 [2024-12-07 04:10:20.145535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:25:37.586 [2024-12-07 04:10:20.145545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:25:37.586 [2024-12-07 04:10:20.145555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:25:37.586 [2024-12-07 04:10:20.145566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:25:37.586 [2024-12-07 04:10:20.145576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:25:37.586 [2024-12-07 04:10:20.145586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:25:37.586 [2024-12-07 04:10:20.145596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:25:37.586 [2024-12-07 04:10:20.145608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:25:37.586 [2024-12-07 04:10:20.145618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:25:37.586 [2024-12-07 04:10:20.145630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:25:37.586 [2024-12-07 04:10:20.145640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:25:37.586 [2024-12-07 04:10:20.145651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:25:37.586 [2024-12-07 04:10:20.145661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:25:37.586 [2024-12-07 04:10:20.145672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:25:37.586 [2024-12-07 04:10:20.145682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:25:37.586 [2024-12-07 04:10:20.145699] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:25:37.586 [2024-12-07 04:10:20.145709] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b6778f2e-d435-43bc-a468-916c780de568
00:25:37.586 [2024-12-07 04:10:20.145720] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:25:37.586 [2024-12-07 04:10:20.145735] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:25:37.586 [2024-12-07 04:10:20.145744] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:25:37.586 [2024-12-07 04:10:20.145755] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:25:37.586 [2024-12-07 04:10:20.145775] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:25:37.586 [2024-12-07 04:10:20.145785] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:25:37.586 [2024-12-07 04:10:20.145795] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:25:37.586 [2024-12-07 04:10:20.145804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:25:37.586 [2024-12-07 04:10:20.145814] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:25:37.586 [2024-12-07 04:10:20.145824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.586 [2024-12-07 04:10:20.145834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:25:37.586 [2024-12-07 04:10:20.145844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.240 ms
00:25:37.586 [2024-12-07 04:10:20.145858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:37.586 [2024-12-07 04:10:20.164918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.586 [2024-12-07 04:10:20.164957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:25:37.586 [2024-12-07 04:10:20.164969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.042 ms
00:25:37.586 [2024-12-07 04:10:20.164979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:37.586 [2024-12-07 04:10:20.165479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:37.586 [2024-12-07 04:10:20.165503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:25:37.586 [2024-12-07 04:10:20.165517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.465 ms
00:25:37.586 [2024-12-07 04:10:20.165527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:37.586 [2024-12-07 04:10:20.214038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:37.586 [2024-12-07 04:10:20.214069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:25:37.586 [2024-12-07 04:10:20.214081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:37.586 [2024-12-07 04:10:20.214107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:37.586 [2024-12-07 04:10:20.214159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:37.586 [2024-12-07 04:10:20.214169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:25:37.586 [2024-12-07 04:10:20.214184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:37.586 [2024-12-07 04:10:20.214193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:37.586 [2024-12-07 04:10:20.214250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:37.586 [2024-12-07 04:10:20.214270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:25:37.586 [2024-12-07 04:10:20.214280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:37.586 [2024-12-07 04:10:20.214290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:37.586 [2024-12-07 04:10:20.214307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:37.586 [2024-12-07 04:10:20.214317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:25:37.586 [2024-12-07 04:10:20.214327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:37.586 [2024-12-07 04:10:20.214341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:37.845 [2024-12-07 04:10:20.330847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:37.845 [2024-12-07 04:10:20.330894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:25:37.845 [2024-12-07 04:10:20.330924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:37.845 [2024-12-07 04:10:20.330935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:37.845 [2024-12-07 04:10:20.425415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:37.845 [2024-12-07 04:10:20.425462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:25:37.845 [2024-12-07 04:10:20.425480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:37.845 [2024-12-07 04:10:20.425490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:37.845 [2024-12-07 04:10:20.425575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:37.845 [2024-12-07 04:10:20.425587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:25:37.845 [2024-12-07 04:10:20.425598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:37.845 [2024-12-07 04:10:20.425607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:37.845 [2024-12-07 04:10:20.425643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:37.845 [2024-12-07 04:10:20.425653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:25:37.845 [2024-12-07 04:10:20.425663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:37.845 [2024-12-07 04:10:20.425672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:37.845 [2024-12-07 04:10:20.425778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:37.845 [2024-12-07 04:10:20.425790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:25:37.845 [2024-12-07 04:10:20.425801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:37.845 [2024-12-07 04:10:20.425810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:37.845 [2024-12-07 04:10:20.425842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:37.845 [2024-12-07 04:10:20.425854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:25:37.845 [2024-12-07 04:10:20.425864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:37.845 [2024-12-07 04:10:20.425873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:37.845 [2024-12-07 04:10:20.425913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:37.845 [2024-12-07 04:10:20.425923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:25:37.845 [2024-12-07 04:10:20.425952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:37.845 [2024-12-07 04:10:20.425961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:37.845 [2024-12-07 04:10:20.426020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:25:37.845 [2024-12-07 04:10:20.426031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:25:37.845 [2024-12-07 04:10:20.426041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:25:37.845 [2024-12-07 04:10:20.426051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:37.845 [2024-12-07 04:10:20.426174] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 493.828 ms, result 0
00:25:38.780
00:25:38.780
00:25:38.780 04:10:21 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:25:40.680 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:25:40.680 04:10:23 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072
[2024-12-07 04:10:23.162305] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization...
[2024-12-07 04:10:23.162433] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80133 ]
[2024-12-07 04:10:23.340723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-12-07 04:10:23.449758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-12-07 04:10:23.807115] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-12-07 04:10:23.807206] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
[2024-12-07 04:10:23.966648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-07 04:10:23.966701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
[2024-12-07 04:10:23.966734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
[2024-12-07 04:10:23.966744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-07 04:10:23.966791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-07 04:10:23.966806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
[2024-12-07 04:10:23.966817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms
[2024-12-07 04:10:23.966827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-07 04:10:23.966847] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
[2024-12-07 04:10:23.967859] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
[2024-12-07 04:10:23.967891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-07 04:10:23.967901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
[2024-12-07 04:10:23.967913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.049 ms
[2024-12-07 04:10:23.967923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-12-07 04:10:23.969397] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
[2024-12-07 04:10:23.987490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-12-07 04:10:23.987527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
[2024-12-07 04:10:23.987540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.123 ms
[2024-12-07 04:10:23.987550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:41.455 [2024-12-07 04:10:23.987640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:41.455 [2024-12-07 04:10:23.987653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:25:41.455 [2024-12-07 04:10:23.987665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms
00:25:41.455 [2024-12-07 04:10:23.987676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:41.455 [2024-12-07 04:10:23.994598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:41.455 [2024-12-07 04:10:23.994627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:25:41.455 [2024-12-07 04:10:23.994637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.862 ms
00:25:41.455 [2024-12-07 04:10:23.994650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:41.455 [2024-12-07 04:10:23.994741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:41.455 [2024-12-07 04:10:23.994755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:25:41.455 [2024-12-07 04:10:23.994765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms
00:25:41.455 [2024-12-07 04:10:23.994775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:41.455 [2024-12-07 04:10:23.994814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:41.455 [2024-12-07 04:10:23.994826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:25:41.455 [2024-12-07 04:10:23.994836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:25:41.455 [2024-12-07 04:10:23.994845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:41.455 [2024-12-07 04:10:23.994872] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:25:41.455 [2024-12-07 04:10:23.999621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:41.455 [2024-12-07 04:10:23.999654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:25:41.455 [2024-12-07 04:10:23.999669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.762 ms
00:25:41.455 [2024-12-07 04:10:23.999679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:41.455 [2024-12-07 04:10:23.999712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:41.455 [2024-12-07 04:10:23.999723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:25:41.455 [2024-12-07 04:10:23.999734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:25:41.455 [2024-12-07 04:10:23.999744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:41.455 [2024-12-07 04:10:23.999795] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:25:41.455 [2024-12-07 04:10:23.999820] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:25:41.455 [2024-12-07 04:10:23.999854] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:25:41.455 [2024-12-07 04:10:23.999874] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:25:41.455 [2024-12-07 04:10:23.999992] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:25:41.455 [2024-12-07 04:10:24.000007] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:25:41.455 [2024-12-07 04:10:24.000021] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:25:41.455 [2024-12-07 04:10:24.000033] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:25:41.455 [2024-12-07 04:10:24.000046] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:25:41.455 [2024-12-07 04:10:24.000057] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:25:41.455 [2024-12-07 04:10:24.000068] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:25:41.455 [2024-12-07 04:10:24.000081] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:25:41.455 [2024-12-07 04:10:24.000091] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:25:41.455 [2024-12-07 04:10:24.000101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:41.455 [2024-12-07 04:10:24.000111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:25:41.455 [2024-12-07 04:10:24.000122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms
00:25:41.455 [2024-12-07 04:10:24.000132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:41.455 [2024-12-07 04:10:24.000204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:41.455 [2024-12-07 04:10:24.000215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:25:41.455 [2024-12-07 04:10:24.000225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms
00:25:41.455 [2024-12-07 04:10:24.000235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:41.455 [2024-12-07 04:10:24.000331] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:25:41.455 [2024-12-07 04:10:24.000346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:25:41.455 [2024-12-07 04:10:24.000357] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:25:41.455 [2024-12-07 04:10:24.000367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:25:41.455 [2024-12-07 04:10:24.000377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:25:41.455 [2024-12-07 04:10:24.000386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:25:41.455 [2024-12-07 04:10:24.000396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB
00:25:41.455 [2024-12-07 04:10:24.000405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:25:41.455 [2024-12-07 04:10:24.000415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB
00:25:41.455 [2024-12-07 04:10:24.000423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:25:41.455 [2024-12-07 04:10:24.000433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:25:41.455 [2024-12-07 04:10:24.000443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB
00:25:41.455 [2024-12-07 04:10:24.000452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:25:41.456 [2024-12-07 04:10:24.000470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:25:41.456 [2024-12-07 04:10:24.000480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB
00:25:41.456 [2024-12-07 04:10:24.000489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:25:41.456 [2024-12-07 04:10:24.000499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:25:41.456 [2024-12-07 04:10:24.000508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB
00:25:41.456 [2024-12-07 04:10:24.000518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:25:41.456 [2024-12-07 04:10:24.000527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:25:41.456 [2024-12-07 04:10:24.000536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB
00:25:41.456 [2024-12-07 04:10:24.000545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:25:41.456 [2024-12-07 04:10:24.000554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:25:41.456 [2024-12-07 04:10:24.000563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB
00:25:41.456 [2024-12-07 04:10:24.000573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:25:41.456 [2024-12-07 04:10:24.000582] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:25:41.456 [2024-12-07 04:10:24.000591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB
00:25:41.456 [2024-12-07 04:10:24.000600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:25:41.456 [2024-12-07 04:10:24.000610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:25:41.456 [2024-12-07 04:10:24.000619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB
00:25:41.456 [2024-12-07 04:10:24.000628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:25:41.456 [2024-12-07 04:10:24.000637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:25:41.456 [2024-12-07 04:10:24.000646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB
00:25:41.456 [2024-12-07 04:10:24.000655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:25:41.456 [2024-12-07 04:10:24.000664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:25:41.456 [2024-12-07 04:10:24.000673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB
00:25:41.456 [2024-12-07 04:10:24.000682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:25:41.456 [2024-12-07 04:10:24.000691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:25:41.456 [2024-12-07 04:10:24.000701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB
00:25:41.456 [2024-12-07 04:10:24.000709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:25:41.456 [2024-12-07 04:10:24.000718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:25:41.456 [2024-12-07 04:10:24.000727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB
00:25:41.456 [2024-12-07 04:10:24.000737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:25:41.456 [2024-12-07 04:10:24.000746] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:25:41.456 [2024-12-07 04:10:24.000756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:25:41.456 [2024-12-07 04:10:24.000765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:25:41.456 [2024-12-07 04:10:24.000775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:25:41.456 [2024-12-07 04:10:24.000784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:25:41.456 [2024-12-07 04:10:24.000793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:25:41.456 [2024-12-07 04:10:24.000802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:25:41.456 [2024-12-07 04:10:24.000811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:25:41.456 [2024-12-07 04:10:24.000820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:25:41.456 [2024-12-07 04:10:24.000829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:25:41.456 [2024-12-07 04:10:24.000840] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:25:41.456 [2024-12-07 04:10:24.000852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:25:41.456 [2024-12-07 04:10:24.000867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:25:41.456 [2024-12-07 04:10:24.000877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:25:41.456 [2024-12-07 04:10:24.000887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:25:41.456 [2024-12-07 04:10:24.000897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:25:41.456 [2024-12-07 04:10:24.000907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:25:41.456 [2024-12-07 04:10:24.000918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:25:41.456 [2024-12-07 04:10:24.000941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:25:41.456 [2024-12-07 04:10:24.000953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:25:41.456 [2024-12-07 04:10:24.000965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:25:41.456 [2024-12-07 04:10:24.000975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:25:41.456 [2024-12-07 04:10:24.000986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:25:41.456 [2024-12-07 04:10:24.000996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:25:41.456 [2024-12-07 04:10:24.001006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:25:41.456 [2024-12-07 04:10:24.001017] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:25:41.456 [2024-12-07 04:10:24.001027] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:25:41.456 [2024-12-07 04:10:24.001038] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:25:41.456 [2024-12-07 04:10:24.001049] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:25:41.456 [2024-12-07 04:10:24.001060] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:25:41.456 [2024-12-07 04:10:24.001070] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:25:41.456 [2024-12-07 04:10:24.001081] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:25:41.456 [2024-12-07 04:10:24.001091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:41.456 [2024-12-07 04:10:24.001101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:25:41.456 [2024-12-07 04:10:24.001112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms
00:25:41.456 [2024-12-07 04:10:24.001122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:41.456 [2024-12-07 04:10:24.040186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:41.456 [2024-12-07 04:10:24.040220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:25:41.456 [2024-12-07 04:10:24.040233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.080 ms
00:25:41.456 [2024-12-07 04:10:24.040247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:41.456 [2024-12-07 04:10:24.040337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:41.456 [2024-12-07 04:10:24.040348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:25:41.456 [2024-12-07 04:10:24.040359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms
00:25:41.456 [2024-12-07 04:10:24.040369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:41.456 [2024-12-07 04:10:24.094923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:41.456 [2024-12-07 04:10:24.094966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:25:41.456 [2024-12-07 04:10:24.094979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.585 ms
00:25:41.456 [2024-12-07 04:10:24.094990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:41.456 [2024-12-07 04:10:24.095042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:41.456 [2024-12-07 04:10:24.095053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:25:41.456 [2024-12-07 04:10:24.095069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:25:41.456 [2024-12-07 04:10:24.095079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:41.456 [2024-12-07 04:10:24.095593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:41.456 [2024-12-07 04:10:24.095616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:25:41.456 [2024-12-07 04:10:24.095627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms
00:25:41.456 [2024-12-07 04:10:24.095637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:41.456 [2024-12-07 04:10:24.095755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:41.456 [2024-12-07 04:10:24.095768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:25:41.456 [2024-12-07 04:10:24.095785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms
00:25:41.456 [2024-12-07 04:10:24.095795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:41.456 [2024-12-07 04:10:24.115261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:41.456 [2024-12-07 04:10:24.115298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:25:41.456 [2024-12-07 04:10:24.115328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.477 ms
00:25:41.456 [2024-12-07 04:10:24.115339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:41.456 [2024-12-07 04:10:24.133598] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:25:41.456 [2024-12-07 04:10:24.133634] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:25:41.456 [2024-12-07 04:10:24.133648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:41.456 [2024-12-07 04:10:24.133659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:25:41.456 [2024-12-07 04:10:24.133686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.229 ms
00:25:41.456 [2024-12-07 04:10:24.133696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:41.456 [2024-12-07 04:10:24.162073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:41.456 [2024-12-07 04:10:24.162121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:25:41.456 [2024-12-07 04:10:24.162150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.380 ms
00:25:41.456 [2024-12-07 04:10:24.162161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:41.456 [2024-12-07 04:10:24.179029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:41.456 [2024-12-07 04:10:24.179065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:25:41.456 [2024-12-07 04:10:24.179077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.840 ms
00:25:41.456 [2024-12-07 04:10:24.179087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:41.715 [2024-12-07 04:10:24.196914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:41.715 [2024-12-07 04:10:24.196953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:25:41.715 [2024-12-07 04:10:24.196965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.803 ms
00:25:41.715 [2024-12-07 04:10:24.196974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:41.715 [2024-12-07 04:10:24.197726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:41.715 [2024-12-07 04:10:24.197761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:25:41.715 [2024-12-07 04:10:24.197778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.628 ms
00:25:41.715 [2024-12-07 04:10:24.197788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:41.715 [2024-12-07 04:10:24.278861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:41.715 [2024-12-07 04:10:24.278921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:25:41.715 [2024-12-07 04:10:24.278966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.182 ms
00:25:41.715 [2024-12-07 04:10:24.278977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:41.715 [2024-12-07 04:10:24.288809] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:25:41.715 [2024-12-07 04:10:24.291151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:41.715 [2024-12-07 04:10:24.291180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:25:41.715 [2024-12-07 04:10:24.291208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.146 ms
00:25:41.715 [2024-12-07 04:10:24.291218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:41.715 [2024-12-07 04:10:24.291295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:41.715 [2024-12-07 04:10:24.291309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:25:41.715 [2024-12-07 04:10:24.291323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:25:41.715 [2024-12-07 04:10:24.291334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:41.715 [2024-12-07 04:10:24.291407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:41.715 [2024-12-07 04:10:24.291420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:25:41.715 [2024-12-07 04:10:24.291431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms
00:25:41.715 [2024-12-07 04:10:24.291441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:41.715 [2024-12-07 04:10:24.291460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:41.715 [2024-12-07 04:10:24.291471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:25:41.715 [2024-12-07 04:10:24.291481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:25:41.715 [2024-12-07 04:10:24.291491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:41.715 [2024-12-07 04:10:24.291544] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:25:41.715 [2024-12-07 04:10:24.291557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:41.715 [2024-12-07 04:10:24.291567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:25:41.715 [2024-12-07 04:10:24.291577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:25:41.715 [2024-12-07 04:10:24.291587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:41.715 [2024-12-07 04:10:24.326011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:41.715 [2024-12-07 04:10:24.326052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:25:41.715 [2024-12-07 04:10:24.326072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.460 ms
00:25:41.715 [2024-12-07 04:10:24.326082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:41.715 [2024-12-07 04:10:24.326167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:41.715 [2024-12-07 04:10:24.326178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:25:41.715 [2024-12-07 04:10:24.326189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms
00:25:41.715 [2024-12-07 04:10:24.326199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:25:41.715 [2024-12-07 04:10:24.327369] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 360.804 ms, result 0
00:25:42.648 [2024-12-07T04:10:26.755Z] Copying: 23/1024 [MB] (23 MBps)
[2024-12-07T04:10:27.690Z] Copying: 48/1024 [MB] (24 MBps)
[2024-12-07T04:10:28.626Z] Copying: 73/1024 [MB] (25 MBps)
[2024-12-07T04:10:29.563Z] Copying: 99/1024 [MB] (25 MBps)
[2024-12-07T04:10:30.497Z] Copying: 125/1024 [MB] (25 MBps)
[2024-12-07T04:10:31.431Z] Copying: 150/1024 [MB] (25 MBps)
[2024-12-07T04:10:32.365Z] Copying: 175/1024 [MB] (24 MBps)
[2024-12-07T04:10:33.738Z] Copying: 200/1024 [MB] (24 MBps)
[2024-12-07T04:10:34.674Z] Copying: 223/1024 [MB] (23 MBps)
[2024-12-07T04:10:35.607Z] Copying: 246/1024 [MB] (22 MBps)
[2024-12-07T04:10:36.539Z] Copying: 271/1024 [MB] (25 MBps)
[2024-12-07T04:10:37.472Z] Copying: 297/1024 [MB] (25 MBps)
[2024-12-07T04:10:38.406Z] Copying: 322/1024 [MB] (25 MBps)
[2024-12-07T04:10:39.342Z] Copying: 348/1024 [MB] (25 MBps)
[2024-12-07T04:10:40.718Z] Copying: 373/1024 [MB] (25 MBps)
[2024-12-07T04:10:41.652Z] Copying: 396/1024 [MB] (23 MBps)
[2024-12-07T04:10:42.586Z] Copying: 421/1024 [MB] (24 MBps)
[2024-12-07T04:10:43.521Z] Copying: 447/1024 [MB] (25 MBps)
[2024-12-07T04:10:44.458Z] Copying: 472/1024 [MB] (25 MBps)
[2024-12-07T04:10:45.416Z] Copying: 497/1024 [MB] (24 MBps)
[2024-12-07T04:10:46.418Z] Copying: 520/1024 [MB] (23 MBps)
[2024-12-07T04:10:47.354Z] Copying: 544/1024 [MB] (23 MBps)
[2024-12-07T04:10:48.729Z] Copying: 568/1024 [MB] (24 MBps)
[2024-12-07T04:10:49.666Z] Copying: 593/1024 [MB] (25 MBps)
[2024-12-07T04:10:50.602Z] Copying: 619/1024 [MB] (25 MBps)
[2024-12-07T04:10:51.539Z] Copying: 642/1024 [MB] (23 MBps)
[2024-12-07T04:10:52.474Z] Copying: 666/1024 [MB] (24 MBps)
[2024-12-07T04:10:53.410Z] Copying: 690/1024 [MB] (24 MBps)
[2024-12-07T04:10:54.349Z] Copying: 712/1024 [MB] (21 MBps)
[2024-12-07T04:10:55.725Z] Copying: 735/1024 [MB] (22 MBps)
[2024-12-07T04:10:56.292Z] Copying: 757/1024 [MB] (22 MBps)
[2024-12-07T04:10:57.670Z] Copying: 780/1024 [MB] (22 MBps)
[2024-12-07T04:10:58.607Z] Copying: 805/1024 [MB] (25 MBps)
[2024-12-07T04:10:59.546Z] Copying: 827/1024 [MB] (22 MBps)
[2024-12-07T04:11:00.483Z] Copying: 851/1024 [MB] (23 MBps)
[2024-12-07T04:11:01.421Z] Copying: 874/1024 [MB] (23 MBps)
[2024-12-07T04:11:02.359Z] Copying: 896/1024 [MB] (22 MBps)
[2024-12-07T04:11:03.297Z] Copying: 919/1024 [MB] (23 MBps)
[2024-12-07T04:11:04.677Z] Copying: 945/1024 [MB] (25 MBps)
[2024-12-07T04:11:05.616Z] Copying: 971/1024 [MB] (25 MBps)
[2024-12-07T04:11:06.551Z] Copying: 997/1024 [MB] (25 MBps)
[2024-12-07T04:11:07.119Z] Copying: 1022/1024 [MB] (25 MBps)
[2024-12-07T04:11:07.119Z] Copying: 1024/1024 [MB] (average 23 MBps)
[2024-12-07 04:11:07.031437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:24.383 [2024-12-07 04:11:07.031499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:26:24.383 [2024-12-07 04:11:07.031524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:26:24.383 [2024-12-07 04:11:07.031535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:24.383 [2024-12-07 04:11:07.032336] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:26:24.383 [2024-12-07 04:11:07.038133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:24.383 [2024-12-07 04:11:07.038173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:26:24.383 [2024-12-07 04:11:07.038187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.773 ms
00:26:24.383 [2024-12-07 04:11:07.038198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:24.383 [2024-12-07 04:11:07.049604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:24.383 [2024-12-07 04:11:07.049643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:26:24.383 [2024-12-07 04:11:07.049673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.835 ms
00:26:24.383 [2024-12-07 04:11:07.049691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:24.383 [2024-12-07 04:11:07.073385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:24.383 [2024-12-07 04:11:07.073428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:26:24.383 [2024-12-07 04:11:07.073443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.714 ms
00:26:24.383 [2024-12-07 04:11:07.073454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:24.383 [2024-12-07 04:11:07.078452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:24.383 [2024-12-07 04:11:07.078484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:26:24.383 [2024-12-07 04:11:07.078497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.959 ms
00:26:24.383 [2024-12-07 04:11:07.078529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:24.383 [2024-12-07 04:11:07.115476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:24.383 [2024-12-07 04:11:07.115515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:26:24.383 [2024-12-07 04:11:07.115529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.965 ms
00:26:24.383 [2024-12-07 04:11:07.115540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:24.642 [2024-12-07 04:11:07.136290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:24.642 [2024-12-07 04:11:07.136326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:26:24.642 [2024-12-07 04:11:07.136340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.704 ms
00:26:24.642 [2024-12-07 04:11:07.136366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:24.642 [2024-12-07 04:11:07.257583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:24.642 [2024-12-07 04:11:07.257627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:26:24.642 [2024-12-07 04:11:07.257642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 121.340 ms
00:26:24.642 [2024-12-07 04:11:07.257652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:24.642 [2024-12-07 04:11:07.292944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:24.642 [2024-12-07 04:11:07.292980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:26:24.642 [2024-12-07 04:11:07.292994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.332 ms
00:26:24.642 [2024-12-07 04:11:07.293004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:24.642 [2024-12-07 04:11:07.327221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:24.642 [2024-12-07 04:11:07.327257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:26:24.642 [2024-12-07 04:11:07.327270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.217 ms
00:26:24.642 [2024-12-07 04:11:07.327296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:24.642 [2024-12-07 04:11:07.361029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:24.642 [2024-12-07 04:11:07.361066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:26:24.642 [2024-12-07 04:11:07.361079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.749 ms
00:26:24.642 [2024-12-07 04:11:07.361088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:24.903 [2024-12-07 04:11:07.395713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:26:24.903 [2024-12-07 04:11:07.395748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:26:24.903 [2024-12-07 04:11:07.395760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.578 ms
00:26:24.903 [2024-12-07 04:11:07.395769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:24.903 [2024-12-07 04:11:07.395823] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:26:24.903 [2024-12-07 04:11:07.395839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 103424 / 261120 wr_cnt: 1 state: open
00:26:24.903 [2024-12-07 04:11:07.395852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.395863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.395874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.395884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.395894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.395905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.395916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.395927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.395947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.395957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.395968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.395978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.395989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.395999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.396009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.396019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.396030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.396040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.396050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.396059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.396070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.396080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.396090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.396101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.396111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.396121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.396131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.396140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.396151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.396162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.396173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.396183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.396193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.396203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.396213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:26:24.903 [2024-12-07 04:11:07.396223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:24.903 [2024-12-07 04:11:07.396233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:24.903 [2024-12-07 04:11:07.396243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:24.903 [2024-12-07 04:11:07.396253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:24.903 [2024-12-07 04:11:07.396263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:24.903 [2024-12-07 04:11:07.396274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:24.903 [2024-12-07 04:11:07.396284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:24.903 [2024-12-07 04:11:07.396294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:24.903 [2024-12-07 04:11:07.396304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:24.903 [2024-12-07 04:11:07.396314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:24.903 [2024-12-07 04:11:07.396324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:24.903 [2024-12-07 04:11:07.396334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:24.903 [2024-12-07 04:11:07.396344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:24.903 [2024-12-07 04:11:07.396354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:24.903 [2024-12-07 04:11:07.396364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:24.903 [2024-12-07 04:11:07.396374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:24.903 [2024-12-07 04:11:07.396384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:24.903 [2024-12-07 04:11:07.396395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:24.903 [2024-12-07 04:11:07.396405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:24.903 [2024-12-07 04:11:07.396415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:24.903 [2024-12-07 04:11:07.396425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:24.903 [2024-12-07 04:11:07.396435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:24.903 [2024-12-07 04:11:07.396445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:24.903 [2024-12-07 04:11:07.396455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:24.903 [2024-12-07 04:11:07.396465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 
wr_cnt: 0 state: free 00:26:24.903 [2024-12-07 04:11:07.396475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:24.903 [2024-12-07 04:11:07.396486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:24.903 [2024-12-07 04:11:07.396496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:24.903 [2024-12-07 04:11:07.396506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:24.904 [2024-12-07 04:11:07.396904] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:24.904 [2024-12-07 04:11:07.396914] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b6778f2e-d435-43bc-a468-916c780de568 00:26:24.904 [2024-12-07 04:11:07.396925] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 103424 00:26:24.904 [2024-12-07 04:11:07.396935] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 104384 00:26:24.904 [2024-12-07 04:11:07.396952] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 103424 00:26:24.904 [2024-12-07 04:11:07.396963] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0093 00:26:24.904 [2024-12-07 04:11:07.396988] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:24.904 [2024-12-07 04:11:07.396998] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:24.904 [2024-12-07 04:11:07.397008] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:24.904 [2024-12-07 04:11:07.397017] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:24.904 [2024-12-07 04:11:07.397026] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:24.904 [2024-12-07 04:11:07.397036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.904 [2024-12-07 04:11:07.397046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 
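An aside on the statistics block just dumped: the reported WAF is simply total media writes divided by user writes, both of which appear a few lines up. A minimal check in Python, with the two values copied from the ftl_dev_dump_stats lines above:

    # Values copied from the ftl_dev_dump_stats output above.
    total_writes = 104384  # "total writes": user data plus FTL metadata writes
    user_writes = 103424   # "user writes": blocks written on behalf of the host
    print(f"WAF = {total_writes / user_writes:.4f}")  # -> WAF = 1.0093, as logged

The 960-block difference is this run's metadata overhead; a WAF this close to 1.0 is what you would expect for a single pass that only ever filled the one open band (Band 1 above).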
00:26:24.904 [2024-12-07 04:11:07.397057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.217 ms 00:26:24.904 [2024-12-07 04:11:07.397067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.904 [2024-12-07 04:11:07.416058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.904 [2024-12-07 04:11:07.416090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:24.904 [2024-12-07 04:11:07.416107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.986 ms 00:26:24.904 [2024-12-07 04:11:07.416117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.904 [2024-12-07 04:11:07.416675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:24.904 [2024-12-07 04:11:07.416694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:24.904 [2024-12-07 04:11:07.416704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.523 ms 00:26:24.904 [2024-12-07 04:11:07.416715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.904 [2024-12-07 04:11:07.467473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:24.904 [2024-12-07 04:11:07.467508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:24.904 [2024-12-07 04:11:07.467521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:24.904 [2024-12-07 04:11:07.467531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.904 [2024-12-07 04:11:07.467585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:24.904 [2024-12-07 04:11:07.467596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:24.904 [2024-12-07 04:11:07.467606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:24.904 [2024-12-07 04:11:07.467616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.904 [2024-12-07 04:11:07.467693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:24.904 [2024-12-07 04:11:07.467712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:24.904 [2024-12-07 04:11:07.467722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:24.904 [2024-12-07 04:11:07.467732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.904 [2024-12-07 04:11:07.467747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:24.904 [2024-12-07 04:11:07.467758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:24.904 [2024-12-07 04:11:07.467769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:24.904 [2024-12-07 04:11:07.467794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:24.904 [2024-12-07 04:11:07.590201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:24.904 [2024-12-07 04:11:07.590287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:24.904 [2024-12-07 04:11:07.590303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:24.904 [2024-12-07 04:11:07.590313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.164 [2024-12-07 04:11:07.684989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.164 [2024-12-07 04:11:07.685044] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:25.164 [2024-12-07 04:11:07.685058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.164 [2024-12-07 04:11:07.685068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.164 [2024-12-07 04:11:07.685169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.164 [2024-12-07 04:11:07.685181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:25.164 [2024-12-07 04:11:07.685192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.164 [2024-12-07 04:11:07.685208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.164 [2024-12-07 04:11:07.685244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.164 [2024-12-07 04:11:07.685255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:25.164 [2024-12-07 04:11:07.685265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.164 [2024-12-07 04:11:07.685276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.164 [2024-12-07 04:11:07.685385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.164 [2024-12-07 04:11:07.685399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:25.164 [2024-12-07 04:11:07.685410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.164 [2024-12-07 04:11:07.685424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.164 [2024-12-07 04:11:07.685475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.164 [2024-12-07 04:11:07.685488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:25.164 [2024-12-07 04:11:07.685498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.164 [2024-12-07 04:11:07.685509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.164 [2024-12-07 04:11:07.685548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.164 [2024-12-07 04:11:07.685559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:25.164 [2024-12-07 04:11:07.685570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.164 [2024-12-07 04:11:07.685580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.164 [2024-12-07 04:11:07.685628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:25.164 [2024-12-07 04:11:07.685640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:25.164 [2024-12-07 04:11:07.685650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:25.164 [2024-12-07 04:11:07.685660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:25.164 [2024-12-07 04:11:07.685780] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 657.004 ms, result 0 00:26:26.550 00:26:26.550 00:26:26.550 04:11:08 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:26:26.550 [2024-12-07 04:11:09.057916] Starting SPDK v25.01-pre 
git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:26:26.550 [2024-12-07 04:11:09.058060] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80595 ] 00:26:26.550 [2024-12-07 04:11:09.236129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.809 [2024-12-07 04:11:09.339558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.067 [2024-12-07 04:11:09.689882] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:27.067 [2024-12-07 04:11:09.689966] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:27.327 [2024-12-07 04:11:09.849890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.327 [2024-12-07 04:11:09.849956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:27.327 [2024-12-07 04:11:09.849973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:27.327 [2024-12-07 04:11:09.849999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.327 [2024-12-07 04:11:09.850048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.327 [2024-12-07 04:11:09.850063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:27.327 [2024-12-07 04:11:09.850074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:26:27.327 [2024-12-07 04:11:09.850085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.327 [2024-12-07 04:11:09.850106] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:27.328 [2024-12-07 04:11:09.851125] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:27.328 [2024-12-07 04:11:09.851154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.328 [2024-12-07 04:11:09.851166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:27.328 [2024-12-07 04:11:09.851177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.054 ms 00:26:27.328 [2024-12-07 04:11:09.851188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.328 [2024-12-07 04:11:09.852639] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:27.328 [2024-12-07 04:11:09.871300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.328 [2024-12-07 04:11:09.871338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:27.328 [2024-12-07 04:11:09.871370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.692 ms 00:26:27.328 [2024-12-07 04:11:09.871381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.328 [2024-12-07 04:11:09.871460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.328 [2024-12-07 04:11:09.871473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:27.328 [2024-12-07 04:11:09.871483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:26:27.328 [2024-12-07 04:11:09.871493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.328 [2024-12-07 04:11:09.878451] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.328 [2024-12-07 04:11:09.878480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:27.328 [2024-12-07 04:11:09.878492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.898 ms 00:26:27.328 [2024-12-07 04:11:09.878506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.328 [2024-12-07 04:11:09.878597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.328 [2024-12-07 04:11:09.878611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:27.328 [2024-12-07 04:11:09.878622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:26:27.328 [2024-12-07 04:11:09.878632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.328 [2024-12-07 04:11:09.878672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.328 [2024-12-07 04:11:09.878684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:27.328 [2024-12-07 04:11:09.878694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:27.328 [2024-12-07 04:11:09.878704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.328 [2024-12-07 04:11:09.878731] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:27.328 [2024-12-07 04:11:09.883397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.328 [2024-12-07 04:11:09.883428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:27.328 [2024-12-07 04:11:09.883442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.679 ms 00:26:27.328 [2024-12-07 04:11:09.883452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.328 [2024-12-07 04:11:09.883500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.328 [2024-12-07 04:11:09.883511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:27.328 [2024-12-07 04:11:09.883522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:27.328 [2024-12-07 04:11:09.883532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.328 [2024-12-07 04:11:09.883584] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:27.328 [2024-12-07 04:11:09.883609] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:27.328 [2024-12-07 04:11:09.883643] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:27.328 [2024-12-07 04:11:09.883663] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:27.328 [2024-12-07 04:11:09.883759] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:27.328 [2024-12-07 04:11:09.883772] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:27.328 [2024-12-07 04:11:09.883801] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:27.328 [2024-12-07 04:11:09.883814] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 
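For scale, the spdk_dd restore pass launched above copies 262144 blocks from ftl0 after skipping 131072. Assuming the usual 4 KiB logical block of an FTL bdev (an assumption; the block size is not printed in this excerpt), those parameters work out to the 1024 [MB] total that the Copying progress reports further down:

    # Sketch of the spdk_dd parameters from the command line above; the
    # 4 KiB block size is assumed, not taken from the log.
    block_size = 4096
    skip = 131072   # --skip: input blocks to pass over first
    count = 262144  # --count: input blocks to copy
    print(f"offset {skip * block_size >> 20} MiB, "
          f"length {count * block_size >> 20} MiB")
    # -> offset 512 MiB, length 1024 MiB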
00:26:27.328 [2024-12-07 04:11:09.883826] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:27.328 [2024-12-07 04:11:09.883837] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:27.328 [2024-12-07 04:11:09.883848] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:27.328 [2024-12-07 04:11:09.883862] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:27.328 [2024-12-07 04:11:09.883872] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:27.328 [2024-12-07 04:11:09.883882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.328 [2024-12-07 04:11:09.883892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:27.328 [2024-12-07 04:11:09.883903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:26:27.328 [2024-12-07 04:11:09.883913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.328 [2024-12-07 04:11:09.883998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.328 [2024-12-07 04:11:09.884010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:27.328 [2024-12-07 04:11:09.884020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:27.328 [2024-12-07 04:11:09.884031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.328 [2024-12-07 04:11:09.884125] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:27.328 [2024-12-07 04:11:09.884145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:27.328 [2024-12-07 04:11:09.884156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:27.328 [2024-12-07 04:11:09.884166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:27.328 [2024-12-07 04:11:09.884176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:27.328 [2024-12-07 04:11:09.884186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:27.328 [2024-12-07 04:11:09.884195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:27.328 [2024-12-07 04:11:09.884205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:27.328 [2024-12-07 04:11:09.884214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:27.328 [2024-12-07 04:11:09.884223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:27.328 [2024-12-07 04:11:09.884234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:27.328 [2024-12-07 04:11:09.884244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:27.328 [2024-12-07 04:11:09.884253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:27.328 [2024-12-07 04:11:09.884272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:27.328 [2024-12-07 04:11:09.884282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:27.328 [2024-12-07 04:11:09.884291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:27.328 [2024-12-07 04:11:09.884301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:27.328 [2024-12-07 04:11:09.884310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:27.328 
[2024-12-07 04:11:09.884319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:27.328 [2024-12-07 04:11:09.884328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:27.328 [2024-12-07 04:11:09.884338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:27.328 [2024-12-07 04:11:09.884347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:27.328 [2024-12-07 04:11:09.884357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:27.328 [2024-12-07 04:11:09.884366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:27.328 [2024-12-07 04:11:09.884375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:27.328 [2024-12-07 04:11:09.884385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:27.328 [2024-12-07 04:11:09.884395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:27.328 [2024-12-07 04:11:09.884404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:27.328 [2024-12-07 04:11:09.884413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:27.328 [2024-12-07 04:11:09.884423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:27.328 [2024-12-07 04:11:09.884432] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:27.328 [2024-12-07 04:11:09.884441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:27.328 [2024-12-07 04:11:09.884450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:27.328 [2024-12-07 04:11:09.884459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:27.328 [2024-12-07 04:11:09.884468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:27.328 [2024-12-07 04:11:09.884477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:27.328 [2024-12-07 04:11:09.884486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:27.328 [2024-12-07 04:11:09.884495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:27.328 [2024-12-07 04:11:09.884504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:27.328 [2024-12-07 04:11:09.884513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:27.328 [2024-12-07 04:11:09.884522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:27.328 [2024-12-07 04:11:09.884531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:27.328 [2024-12-07 04:11:09.884541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:27.328 [2024-12-07 04:11:09.884551] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:27.328 [2024-12-07 04:11:09.884561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:27.328 [2024-12-07 04:11:09.884571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:27.328 [2024-12-07 04:11:09.884581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:27.328 [2024-12-07 04:11:09.884591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:27.328 [2024-12-07 04:11:09.884601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:27.328 [2024-12-07 04:11:09.884610] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 3.38 MiB 00:26:27.328 [2024-12-07 04:11:09.884620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:27.328 [2024-12-07 04:11:09.884629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:27.328 [2024-12-07 04:11:09.884638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:27.328 [2024-12-07 04:11:09.884649] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:27.328 [2024-12-07 04:11:09.884661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:27.328 [2024-12-07 04:11:09.884679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:27.328 [2024-12-07 04:11:09.884690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:27.328 [2024-12-07 04:11:09.884700] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:27.328 [2024-12-07 04:11:09.884710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:27.328 [2024-12-07 04:11:09.884721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:27.328 [2024-12-07 04:11:09.884732] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:27.328 [2024-12-07 04:11:09.884742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:27.328 [2024-12-07 04:11:09.884752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:27.328 [2024-12-07 04:11:09.884762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:27.328 [2024-12-07 04:11:09.884773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:27.328 [2024-12-07 04:11:09.884783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:27.328 [2024-12-07 04:11:09.884794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:27.328 [2024-12-07 04:11:09.884804] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:27.328 [2024-12-07 04:11:09.884815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:27.328 [2024-12-07 04:11:09.884825] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:27.328 [2024-12-07 04:11:09.884836] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:27.328 [2024-12-07 04:11:09.884847] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:27.328 [2024-12-07 04:11:09.884858] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:27.328 [2024-12-07 04:11:09.884868] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:27.328 [2024-12-07 04:11:09.884879] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:27.328 [2024-12-07 04:11:09.884890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.328 [2024-12-07 04:11:09.884901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:27.329 [2024-12-07 04:11:09.884912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.820 ms 00:26:27.329 [2024-12-07 04:11:09.884922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.329 [2024-12-07 04:11:09.921571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.329 [2024-12-07 04:11:09.921611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:27.329 [2024-12-07 04:11:09.921624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.653 ms 00:26:27.329 [2024-12-07 04:11:09.921638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.329 [2024-12-07 04:11:09.921731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.329 [2024-12-07 04:11:09.921741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:27.329 [2024-12-07 04:11:09.921752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:26:27.329 [2024-12-07 04:11:09.921762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.329 [2024-12-07 04:11:09.977892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.329 [2024-12-07 04:11:09.977935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:27.329 [2024-12-07 04:11:09.977949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.163 ms 00:26:27.329 [2024-12-07 04:11:09.977959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.329 [2024-12-07 04:11:09.978012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.329 [2024-12-07 04:11:09.978023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:27.329 [2024-12-07 04:11:09.978039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:26:27.329 [2024-12-07 04:11:09.978049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.329 [2024-12-07 04:11:09.978559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.329 [2024-12-07 04:11:09.978582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:27.329 [2024-12-07 04:11:09.978593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:26:27.329 [2024-12-07 04:11:09.978603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.329 [2024-12-07 04:11:09.978719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.329 [2024-12-07 04:11:09.978733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 
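The layout summary dumped above is easy to cross-check: 20971520 L2P entries at the reported 4-byte address size is exactly the 80.00 MiB that ftl_layout.c prints for the l2p region of the NV cache layout. A one-line verification:

    # Cross-check of figures from ftl_layout_setup / ftl_layout_dump above.
    l2p_entries = 20971520  # "L2P entries"
    entry_bytes = 4         # "L2P address size"
    print(f"l2p region: {l2p_entries * entry_bytes / (1 << 20):.2f} MiB")
    # -> 80.00 MiB, matching "Region l2p ... blocks: 80.00 MiB"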
00:26:27.329 [2024-12-07 04:11:09.978749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:26:27.329 [2024-12-07 04:11:09.978759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.329 [2024-12-07 04:11:09.998399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.329 [2024-12-07 04:11:09.998436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:27.329 [2024-12-07 04:11:09.998466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.652 ms 00:26:27.329 [2024-12-07 04:11:09.998476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.329 [2024-12-07 04:11:10.017222] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:26:27.329 [2024-12-07 04:11:10.017278] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:27.329 [2024-12-07 04:11:10.017294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.329 [2024-12-07 04:11:10.017304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:27.329 [2024-12-07 04:11:10.017333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.747 ms 00:26:27.329 [2024-12-07 04:11:10.017343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.329 [2024-12-07 04:11:10.046488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.329 [2024-12-07 04:11:10.046529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:27.329 [2024-12-07 04:11:10.046543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.150 ms 00:26:27.329 [2024-12-07 04:11:10.046555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.598 [2024-12-07 04:11:10.065405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.598 [2024-12-07 04:11:10.065444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:27.598 [2024-12-07 04:11:10.065457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.836 ms 00:26:27.598 [2024-12-07 04:11:10.065467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.598 [2024-12-07 04:11:10.083319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.598 [2024-12-07 04:11:10.083355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:27.598 [2024-12-07 04:11:10.083368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.840 ms 00:26:27.598 [2024-12-07 04:11:10.083378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.598 [2024-12-07 04:11:10.084164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.598 [2024-12-07 04:11:10.084196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:27.599 [2024-12-07 04:11:10.084212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.677 ms 00:26:27.599 [2024-12-07 04:11:10.084222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.599 [2024-12-07 04:11:10.165440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.599 [2024-12-07 04:11:10.165505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:27.599 [2024-12-07 04:11:10.165528] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.325 ms 00:26:27.599 [2024-12-07 04:11:10.165555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.599 [2024-12-07 04:11:10.175650] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:27.599 [2024-12-07 04:11:10.178170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.599 [2024-12-07 04:11:10.178196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:27.599 [2024-12-07 04:11:10.178209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.589 ms 00:26:27.599 [2024-12-07 04:11:10.178219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.599 [2024-12-07 04:11:10.178317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.599 [2024-12-07 04:11:10.178348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:27.599 [2024-12-07 04:11:10.178363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:27.599 [2024-12-07 04:11:10.178373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.599 [2024-12-07 04:11:10.179859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.599 [2024-12-07 04:11:10.179898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:27.599 [2024-12-07 04:11:10.179927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.443 ms 00:26:27.599 [2024-12-07 04:11:10.179937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.599 [2024-12-07 04:11:10.179983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.599 [2024-12-07 04:11:10.179995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:27.599 [2024-12-07 04:11:10.180006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:27.599 [2024-12-07 04:11:10.180016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.599 [2024-12-07 04:11:10.180056] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:27.599 [2024-12-07 04:11:10.180069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.599 [2024-12-07 04:11:10.180080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:27.599 [2024-12-07 04:11:10.180090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:27.599 [2024-12-07 04:11:10.180100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.599 [2024-12-07 04:11:10.215137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.599 [2024-12-07 04:11:10.215173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:27.599 [2024-12-07 04:11:10.215209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.076 ms 00:26:27.599 [2024-12-07 04:11:10.215219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:27.599 [2024-12-07 04:11:10.215288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:27.599 [2024-12-07 04:11:10.215301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:27.599 [2024-12-07 04:11:10.215311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:27.599 [2024-12-07 04:11:10.215322] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0
00:26:27.599 [2024-12-07 04:11:10.216383] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 366.644 ms, result 0
00:26:28.979  [2024-12-07T04:11:12.651Z] Copying: 21/1024 [MB] (21 MBps)
[... 38 intermediate Copying progress updates elided (47/1024 through 1003/1024 [MB], steady at 24-27 MBps) ...]
[2024-12-07T04:11:50.378Z] Copying: 1024/1024 [MB] (average 25 MBps)
[2024-12-07 04:11:50.234362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:07.642 [2024-12-07 04:11:50.234448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:27:07.642 [2024-12-07 04:11:50.234490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:27:07.642 [2024-12-07 04:11:50.234510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:07.642 [2024-12-07 04:11:50.234550] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:27:07.642 [2024-12-07 04:11:50.241117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:07.642 [2024-12-07 04:11:50.241168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:27:07.642 [2024-12-07 04:11:50.241186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration:
6.546 ms 00:27:07.642 [2024-12-07 04:11:50.241202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.642 [2024-12-07 04:11:50.241476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.642 [2024-12-07 04:11:50.241500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:07.642 [2024-12-07 04:11:50.241516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:27:07.642 [2024-12-07 04:11:50.241537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.642 [2024-12-07 04:11:50.247658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.642 [2024-12-07 04:11:50.247700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:07.642 [2024-12-07 04:11:50.247713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.108 ms 00:27:07.642 [2024-12-07 04:11:50.247724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.642 [2024-12-07 04:11:50.252768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.642 [2024-12-07 04:11:50.252801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:07.642 [2024-12-07 04:11:50.252814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.013 ms 00:27:07.642 [2024-12-07 04:11:50.252831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.642 [2024-12-07 04:11:50.288800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.642 [2024-12-07 04:11:50.288838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:07.642 [2024-12-07 04:11:50.288853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.985 ms 00:27:07.642 [2024-12-07 04:11:50.288863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.642 [2024-12-07 04:11:50.309697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.642 [2024-12-07 04:11:50.309736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:07.642 [2024-12-07 04:11:50.309751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.827 ms 00:27:07.642 [2024-12-07 04:11:50.309762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.904 [2024-12-07 04:11:50.438377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.904 [2024-12-07 04:11:50.438431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:07.904 [2024-12-07 04:11:50.438446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 128.778 ms 00:27:07.904 [2024-12-07 04:11:50.438457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.904 [2024-12-07 04:11:50.473615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.904 [2024-12-07 04:11:50.473659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:07.904 [2024-12-07 04:11:50.473690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.197 ms 00:27:07.904 [2024-12-07 04:11:50.473700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.904 [2024-12-07 04:11:50.508032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.904 [2024-12-07 04:11:50.508073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:07.904 [2024-12-07 04:11:50.508086] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.350 ms
00:27:07.904 [2024-12-07 04:11:50.508096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:07.904 [2024-12-07 04:11:50.542606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:07.904 [2024-12-07 04:11:50.542641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:27:07.904 [2024-12-07 04:11:50.542654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.501 ms
00:27:07.904 [2024-12-07 04:11:50.542680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:07.904 [2024-12-07 04:11:50.576727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:27:07.904 [2024-12-07 04:11:50.576761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:27:07.904 [2024-12-07 04:11:50.576774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.010 ms
00:27:07.904 [2024-12-07 04:11:50.576784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:07.904 [2024-12-07 04:11:50.576835] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:27:07.904 [2024-12-07 04:11:50.576851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open
00:27:07.904 [2024-12-07 04:11:50.576864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
[... Bands 3-66 elided: every entry reads 0 / 261120 wr_cnt: 0 state: free ...]
00:27:07.905 [2024-12-07 04:11:50.577601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:07.905 [2024-12-07 04:11:50.577977] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:07.905 [2024-12-07 04:11:50.577988] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b6778f2e-d435-43bc-a468-916c780de568 00:27:07.905 [2024-12-07 04:11:50.577998] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:27:07.905 [2024-12-07 04:11:50.578008] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 28608 00:27:07.905 [2024-12-07 04:11:50.578018] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 27648 00:27:07.905 [2024-12-07 04:11:50.578028] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0347 00:27:07.905 [2024-12-07 04:11:50.578051] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:07.905 [2024-12-07 04:11:50.578071] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:07.905 [2024-12-07 04:11:50.578081] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:07.905 [2024-12-07 04:11:50.578089] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:07.905 [2024-12-07 04:11:50.578098] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:07.905 [2024-12-07 04:11:50.578108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.905 [2024-12-07 04:11:50.578118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:07.905 [2024-12-07 04:11:50.578129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.276 ms 00:27:07.905 [2024-12-07 04:11:50.578138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.905 [2024-12-07 04:11:50.597642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.905 [2024-12-07 04:11:50.597674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:07.905 [2024-12-07 04:11:50.597691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.499 ms 00:27:07.905 [2024-12-07 04:11:50.597701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:07.905 [2024-12-07 04:11:50.598277] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:07.905 [2024-12-07 04:11:50.598314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:07.905 [2024-12-07 04:11:50.598325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:27:07.905 [2024-12-07 04:11:50.598335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.166 [2024-12-07 04:11:50.648061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.166 [2024-12-07 04:11:50.648099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:08.166 [2024-12-07 04:11:50.648111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.166 [2024-12-07 04:11:50.648121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.166 [2024-12-07 04:11:50.648187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.166 [2024-12-07 04:11:50.648198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:08.166 [2024-12-07 04:11:50.648208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.166 [2024-12-07 04:11:50.648217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.166 [2024-12-07 04:11:50.648292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.166 [2024-12-07 04:11:50.648304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:08.166 [2024-12-07 04:11:50.648319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.166 [2024-12-07 04:11:50.648329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.166 [2024-12-07 04:11:50.648344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.166 [2024-12-07 04:11:50.648354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:08.166 [2024-12-07 04:11:50.648364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.166 [2024-12-07 04:11:50.648374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.166 [2024-12-07 04:11:50.766114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.166 [2024-12-07 04:11:50.766177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:08.166 [2024-12-07 04:11:50.766190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.166 [2024-12-07 04:11:50.766200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.166 [2024-12-07 04:11:50.861874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.166 [2024-12-07 04:11:50.861923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:08.166 [2024-12-07 04:11:50.861948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.166 [2024-12-07 04:11:50.861960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.166 [2024-12-07 04:11:50.862061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.166 [2024-12-07 04:11:50.862073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:08.166 [2024-12-07 04:11:50.862094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.166 [2024-12-07 04:11:50.862109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:27:08.167 [2024-12-07 04:11:50.862146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.167 [2024-12-07 04:11:50.862157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:08.167 [2024-12-07 04:11:50.862167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.167 [2024-12-07 04:11:50.862177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.167 [2024-12-07 04:11:50.862299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.167 [2024-12-07 04:11:50.862312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:08.167 [2024-12-07 04:11:50.862323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.167 [2024-12-07 04:11:50.862333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.167 [2024-12-07 04:11:50.862374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.167 [2024-12-07 04:11:50.862387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:08.167 [2024-12-07 04:11:50.862398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.167 [2024-12-07 04:11:50.862408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.167 [2024-12-07 04:11:50.862449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.167 [2024-12-07 04:11:50.862460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:08.167 [2024-12-07 04:11:50.862471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.167 [2024-12-07 04:11:50.862480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.167 [2024-12-07 04:11:50.862525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.167 [2024-12-07 04:11:50.862538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:08.167 [2024-12-07 04:11:50.862548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.167 [2024-12-07 04:11:50.862558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.167 [2024-12-07 04:11:50.862678] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 629.316 ms, result 0 00:27:09.548 00:27:09.548 00:27:09.548 04:11:51 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:10.926 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:27:10.926 04:11:53 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:27:10.926 04:11:53 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:27:10.926 04:11:53 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:11.186 04:11:53 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:11.186 04:11:53 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:11.186 Process with pid 78997 is not found 00:27:11.186 Remove shared memory files 00:27:11.186 04:11:53 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 78997 00:27:11.186 04:11:53 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 78997 ']' 00:27:11.186 04:11:53 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 78997 
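(For reference, the WAF value in the statistics dump above follows directly from the two write counters it prints: write amplification is total media writes divided by user-issued writes.)

    WAF = total writes / user writes = 28608 / 27648 ≈ 1.0347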
00:27:11.186 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78997) - No such process 00:27:11.186 04:11:53 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 78997 is not found' 00:27:11.186 04:11:53 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:27:11.186 04:11:53 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:11.186 04:11:53 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:27:11.186 04:11:53 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:27:11.186 04:11:53 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:27:11.186 04:11:53 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:11.186 04:11:53 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:27:11.186 00:27:11.186 real 3m17.665s 00:27:11.186 user 3m5.787s 00:27:11.186 sys 0m13.364s 00:27:11.186 04:11:53 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:11.186 04:11:53 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:27:11.186 ************************************ 00:27:11.186 END TEST ftl_restore 00:27:11.186 ************************************ 00:27:11.186 04:11:53 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:27:11.186 04:11:53 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:11.186 04:11:53 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:11.186 04:11:53 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:11.186 ************************************ 00:27:11.186 START TEST ftl_dirty_shutdown 00:27:11.186 ************************************ 00:27:11.186 04:11:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:27:11.186 * Looking for test storage... 
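(The teardown traced above relies on a guarded-kill idiom: `kill -0 $pid` only probes whether the process still exists, so the helper can report "is not found" instead of signalling a dead pid. A minimal standalone sketch of that idiom follows; this is a hypothetical simplified version, the real autotest_common.sh helper does more bookkeeping.)

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1            # no pid recorded: nothing to do
        if kill -0 "$pid" 2>/dev/null; then  # probe only; signal 0 delivers nothing
            kill "$pid"                      # still alive: terminate it
        else
            echo "Process with pid $pid is not found"
        fi
    }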
00:27:11.186 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:11.186 04:11:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:11.446 04:11:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:27:11.446 04:11:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:11.446 04:11:53 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:11.446 04:11:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:11.446 04:11:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:11.446 04:11:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:11.446 04:11:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:11.446 04:11:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:11.446 04:11:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:11.446 04:11:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:11.446 04:11:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:11.446 04:11:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:11.446 04:11:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:11.446 04:11:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:11.446 04:11:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:11.446 04:11:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:27:11.446 04:11:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:11.446 04:11:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:11.446 04:11:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:11.446 04:11:53 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:11.446 04:11:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:11.446 04:11:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:11.446 04:11:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:11.446 04:11:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:11.446 04:11:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:11.446 04:11:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:11.446 04:11:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:11.446 04:11:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:11.446 04:11:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:11.446 04:11:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:11.446 04:11:54 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:27:11.446 04:11:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:11.446 04:11:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:11.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.447 --rc genhtml_branch_coverage=1 00:27:11.447 --rc genhtml_function_coverage=1 00:27:11.447 --rc genhtml_legend=1 00:27:11.447 --rc geninfo_all_blocks=1 00:27:11.447 --rc geninfo_unexecuted_blocks=1 00:27:11.447 00:27:11.447 ' 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:11.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.447 --rc genhtml_branch_coverage=1 00:27:11.447 --rc genhtml_function_coverage=1 00:27:11.447 --rc genhtml_legend=1 00:27:11.447 --rc geninfo_all_blocks=1 00:27:11.447 --rc geninfo_unexecuted_blocks=1 00:27:11.447 00:27:11.447 ' 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:11.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.447 --rc genhtml_branch_coverage=1 00:27:11.447 --rc genhtml_function_coverage=1 00:27:11.447 --rc genhtml_legend=1 00:27:11.447 --rc geninfo_all_blocks=1 00:27:11.447 --rc geninfo_unexecuted_blocks=1 00:27:11.447 00:27:11.447 ' 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:11.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:11.447 --rc genhtml_branch_coverage=1 00:27:11.447 --rc genhtml_function_coverage=1 00:27:11.447 --rc genhtml_legend=1 00:27:11.447 --rc geninfo_all_blocks=1 00:27:11.447 --rc geninfo_unexecuted_blocks=1 00:27:11.447 00:27:11.447 ' 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:27:11.447 04:11:54 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81117 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81117 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81117 ']' 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:11.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:11.447 04:11:54 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:11.447 [2024-12-07 04:11:54.149871] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
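(The waitforlisten call echoed just above blocks until the freshly launched spdk_tgt answers on its RPC socket. A minimal sketch of that polling loop, under the assumption that rpc.py's rpc_get_methods serves as the liveness probe; the real helper also verifies the pid is still alive while waiting.)

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    svcpid=$!
    # Poll the default RPC socket until the target responds (or ~10 s elapse).
    for _ in $(seq 1 100); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done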
00:27:11.447 [2024-12-07 04:11:54.150021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81117 ] 00:27:11.707 [2024-12-07 04:11:54.333284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:11.707 [2024-12-07 04:11:54.441718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:12.646 04:11:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:12.646 04:11:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:27:12.646 04:11:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:27:12.646 04:11:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:27:12.646 04:11:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:12.646 04:11:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:27:12.646 04:11:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:12.646 04:11:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:27:12.905 04:11:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:27:12.905 04:11:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:12.905 04:11:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:27:12.905 04:11:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:27:12.905 04:11:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:12.905 04:11:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:12.905 04:11:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:12.905 04:11:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:27:13.164 04:11:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:13.164 { 00:27:13.164 "name": "nvme0n1", 00:27:13.164 "aliases": [ 00:27:13.164 "6f3d064a-64ff-4ebb-bfc5-609dfd1ba70a" 00:27:13.164 ], 00:27:13.164 "product_name": "NVMe disk", 00:27:13.164 "block_size": 4096, 00:27:13.164 "num_blocks": 1310720, 00:27:13.164 "uuid": "6f3d064a-64ff-4ebb-bfc5-609dfd1ba70a", 00:27:13.164 "numa_id": -1, 00:27:13.164 "assigned_rate_limits": { 00:27:13.164 "rw_ios_per_sec": 0, 00:27:13.164 "rw_mbytes_per_sec": 0, 00:27:13.164 "r_mbytes_per_sec": 0, 00:27:13.164 "w_mbytes_per_sec": 0 00:27:13.164 }, 00:27:13.164 "claimed": true, 00:27:13.164 "claim_type": "read_many_write_one", 00:27:13.164 "zoned": false, 00:27:13.164 "supported_io_types": { 00:27:13.164 "read": true, 00:27:13.165 "write": true, 00:27:13.165 "unmap": true, 00:27:13.165 "flush": true, 00:27:13.165 "reset": true, 00:27:13.165 "nvme_admin": true, 00:27:13.165 "nvme_io": true, 00:27:13.165 "nvme_io_md": false, 00:27:13.165 "write_zeroes": true, 00:27:13.165 "zcopy": false, 00:27:13.165 "get_zone_info": false, 00:27:13.165 "zone_management": false, 00:27:13.165 "zone_append": false, 00:27:13.165 "compare": true, 00:27:13.165 "compare_and_write": false, 00:27:13.165 "abort": true, 00:27:13.165 "seek_hole": false, 00:27:13.165 "seek_data": false, 00:27:13.165 
"copy": true, 00:27:13.165 "nvme_iov_md": false 00:27:13.165 }, 00:27:13.165 "driver_specific": { 00:27:13.165 "nvme": [ 00:27:13.165 { 00:27:13.165 "pci_address": "0000:00:11.0", 00:27:13.165 "trid": { 00:27:13.165 "trtype": "PCIe", 00:27:13.165 "traddr": "0000:00:11.0" 00:27:13.165 }, 00:27:13.165 "ctrlr_data": { 00:27:13.165 "cntlid": 0, 00:27:13.165 "vendor_id": "0x1b36", 00:27:13.165 "model_number": "QEMU NVMe Ctrl", 00:27:13.165 "serial_number": "12341", 00:27:13.165 "firmware_revision": "8.0.0", 00:27:13.165 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:13.165 "oacs": { 00:27:13.165 "security": 0, 00:27:13.165 "format": 1, 00:27:13.165 "firmware": 0, 00:27:13.165 "ns_manage": 1 00:27:13.165 }, 00:27:13.165 "multi_ctrlr": false, 00:27:13.165 "ana_reporting": false 00:27:13.165 }, 00:27:13.165 "vs": { 00:27:13.165 "nvme_version": "1.4" 00:27:13.165 }, 00:27:13.165 "ns_data": { 00:27:13.165 "id": 1, 00:27:13.165 "can_share": false 00:27:13.165 } 00:27:13.165 } 00:27:13.165 ], 00:27:13.165 "mp_policy": "active_passive" 00:27:13.165 } 00:27:13.165 } 00:27:13.165 ]' 00:27:13.165 04:11:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:13.165 04:11:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:13.165 04:11:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:13.165 04:11:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:27:13.165 04:11:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:27:13.165 04:11:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:27:13.165 04:11:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:13.165 04:11:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:27:13.165 04:11:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:13.165 04:11:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:13.165 04:11:55 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:13.423 04:11:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=4c8dd4ab-225b-4037-af49-e1e25b8bf6bd 00:27:13.423 04:11:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:13.423 04:11:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4c8dd4ab-225b-4037-af49-e1e25b8bf6bd 00:27:13.683 04:11:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:27:13.942 04:11:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=dd70bdd4-aa90-4f31-9eb7-498c14de76da 00:27:13.942 04:11:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u dd70bdd4-aa90-4f31-9eb7-498c14de76da 00:27:14.203 04:11:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=060c9d3b-67f6-4437-877a-e741914f1143 00:27:14.203 04:11:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:27:14.203 04:11:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 060c9d3b-67f6-4437-877a-e741914f1143 00:27:14.203 04:11:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:27:14.203 04:11:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:27:14.203 04:11:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=060c9d3b-67f6-4437-877a-e741914f1143 00:27:14.203 04:11:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:27:14.203 04:11:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 060c9d3b-67f6-4437-877a-e741914f1143 00:27:14.203 04:11:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=060c9d3b-67f6-4437-877a-e741914f1143 00:27:14.203 04:11:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:14.203 04:11:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:14.203 04:11:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:14.203 04:11:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 060c9d3b-67f6-4437-877a-e741914f1143 00:27:14.203 04:11:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:14.203 { 00:27:14.203 "name": "060c9d3b-67f6-4437-877a-e741914f1143", 00:27:14.203 "aliases": [ 00:27:14.203 "lvs/nvme0n1p0" 00:27:14.203 ], 00:27:14.203 "product_name": "Logical Volume", 00:27:14.203 "block_size": 4096, 00:27:14.203 "num_blocks": 26476544, 00:27:14.203 "uuid": "060c9d3b-67f6-4437-877a-e741914f1143", 00:27:14.203 "assigned_rate_limits": { 00:27:14.203 "rw_ios_per_sec": 0, 00:27:14.203 "rw_mbytes_per_sec": 0, 00:27:14.203 "r_mbytes_per_sec": 0, 00:27:14.203 "w_mbytes_per_sec": 0 00:27:14.203 }, 00:27:14.203 "claimed": false, 00:27:14.203 "zoned": false, 00:27:14.203 "supported_io_types": { 00:27:14.203 "read": true, 00:27:14.203 "write": true, 00:27:14.203 "unmap": true, 00:27:14.203 "flush": false, 00:27:14.203 "reset": true, 00:27:14.203 "nvme_admin": false, 00:27:14.203 "nvme_io": false, 00:27:14.203 "nvme_io_md": false, 00:27:14.203 "write_zeroes": true, 00:27:14.203 "zcopy": false, 00:27:14.203 "get_zone_info": false, 00:27:14.203 "zone_management": false, 00:27:14.203 "zone_append": false, 00:27:14.203 "compare": false, 00:27:14.203 "compare_and_write": false, 00:27:14.203 "abort": false, 00:27:14.203 "seek_hole": true, 00:27:14.203 "seek_data": true, 00:27:14.203 "copy": false, 00:27:14.203 "nvme_iov_md": false 00:27:14.203 }, 00:27:14.203 "driver_specific": { 00:27:14.203 "lvol": { 00:27:14.203 "lvol_store_uuid": "dd70bdd4-aa90-4f31-9eb7-498c14de76da", 00:27:14.203 "base_bdev": "nvme0n1", 00:27:14.203 "thin_provision": true, 00:27:14.203 "num_allocated_clusters": 0, 00:27:14.203 "snapshot": false, 00:27:14.203 "clone": false, 00:27:14.203 "esnap_clone": false 00:27:14.203 } 00:27:14.203 } 00:27:14.203 } 00:27:14.203 ]' 00:27:14.203 04:11:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:14.463 04:11:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:14.463 04:11:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:14.463 04:11:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:14.463 04:11:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:14.463 04:11:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:14.463 04:11:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:27:14.463 04:11:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:14.463 04:11:56 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:27:14.722 04:11:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:27:14.722 04:11:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:27:14.722 04:11:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 060c9d3b-67f6-4437-877a-e741914f1143 00:27:14.722 04:11:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=060c9d3b-67f6-4437-877a-e741914f1143 00:27:14.722 04:11:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:14.722 04:11:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:14.723 04:11:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:14.723 04:11:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 060c9d3b-67f6-4437-877a-e741914f1143 00:27:14.983 04:11:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:14.983 { 00:27:14.983 "name": "060c9d3b-67f6-4437-877a-e741914f1143", 00:27:14.983 "aliases": [ 00:27:14.983 "lvs/nvme0n1p0" 00:27:14.983 ], 00:27:14.983 "product_name": "Logical Volume", 00:27:14.983 "block_size": 4096, 00:27:14.983 "num_blocks": 26476544, 00:27:14.983 "uuid": "060c9d3b-67f6-4437-877a-e741914f1143", 00:27:14.983 "assigned_rate_limits": { 00:27:14.983 "rw_ios_per_sec": 0, 00:27:14.983 "rw_mbytes_per_sec": 0, 00:27:14.983 "r_mbytes_per_sec": 0, 00:27:14.983 "w_mbytes_per_sec": 0 00:27:14.983 }, 00:27:14.983 "claimed": false, 00:27:14.983 "zoned": false, 00:27:14.983 "supported_io_types": { 00:27:14.983 "read": true, 00:27:14.983 "write": true, 00:27:14.983 "unmap": true, 00:27:14.983 "flush": false, 00:27:14.983 "reset": true, 00:27:14.983 "nvme_admin": false, 00:27:14.983 "nvme_io": false, 00:27:14.983 "nvme_io_md": false, 00:27:14.983 "write_zeroes": true, 00:27:14.983 "zcopy": false, 00:27:14.983 "get_zone_info": false, 00:27:14.983 "zone_management": false, 00:27:14.983 "zone_append": false, 00:27:14.983 "compare": false, 00:27:14.983 "compare_and_write": false, 00:27:14.983 "abort": false, 00:27:14.983 "seek_hole": true, 00:27:14.983 "seek_data": true, 00:27:14.983 "copy": false, 00:27:14.983 "nvme_iov_md": false 00:27:14.983 }, 00:27:14.983 "driver_specific": { 00:27:14.983 "lvol": { 00:27:14.983 "lvol_store_uuid": "dd70bdd4-aa90-4f31-9eb7-498c14de76da", 00:27:14.983 "base_bdev": "nvme0n1", 00:27:14.983 "thin_provision": true, 00:27:14.983 "num_allocated_clusters": 0, 00:27:14.983 "snapshot": false, 00:27:14.983 "clone": false, 00:27:14.983 "esnap_clone": false 00:27:14.983 } 00:27:14.983 } 00:27:14.983 } 00:27:14.983 ]' 00:27:14.983 04:11:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:14.983 04:11:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:14.983 04:11:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:14.983 04:11:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:14.983 04:11:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:14.983 04:11:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:14.983 04:11:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:27:14.983 04:11:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:27:15.243 04:11:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:27:15.243 04:11:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 060c9d3b-67f6-4437-877a-e741914f1143 00:27:15.243 04:11:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=060c9d3b-67f6-4437-877a-e741914f1143 00:27:15.243 04:11:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:27:15.243 04:11:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:27:15.243 04:11:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:27:15.243 04:11:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 060c9d3b-67f6-4437-877a-e741914f1143 00:27:15.503 04:11:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:27:15.503 { 00:27:15.503 "name": "060c9d3b-67f6-4437-877a-e741914f1143", 00:27:15.503 "aliases": [ 00:27:15.503 "lvs/nvme0n1p0" 00:27:15.503 ], 00:27:15.503 "product_name": "Logical Volume", 00:27:15.503 "block_size": 4096, 00:27:15.503 "num_blocks": 26476544, 00:27:15.503 "uuid": "060c9d3b-67f6-4437-877a-e741914f1143", 00:27:15.504 "assigned_rate_limits": { 00:27:15.504 "rw_ios_per_sec": 0, 00:27:15.504 "rw_mbytes_per_sec": 0, 00:27:15.504 "r_mbytes_per_sec": 0, 00:27:15.504 "w_mbytes_per_sec": 0 00:27:15.504 }, 00:27:15.504 "claimed": false, 00:27:15.504 "zoned": false, 00:27:15.504 "supported_io_types": { 00:27:15.504 "read": true, 00:27:15.504 "write": true, 00:27:15.504 "unmap": true, 00:27:15.504 "flush": false, 00:27:15.504 "reset": true, 00:27:15.504 "nvme_admin": false, 00:27:15.504 "nvme_io": false, 00:27:15.504 "nvme_io_md": false, 00:27:15.504 "write_zeroes": true, 00:27:15.504 "zcopy": false, 00:27:15.504 "get_zone_info": false, 00:27:15.504 "zone_management": false, 00:27:15.504 "zone_append": false, 00:27:15.504 "compare": false, 00:27:15.504 "compare_and_write": false, 00:27:15.504 "abort": false, 00:27:15.504 "seek_hole": true, 00:27:15.504 "seek_data": true, 00:27:15.504 "copy": false, 00:27:15.504 "nvme_iov_md": false 00:27:15.504 }, 00:27:15.504 "driver_specific": { 00:27:15.504 "lvol": { 00:27:15.504 "lvol_store_uuid": "dd70bdd4-aa90-4f31-9eb7-498c14de76da", 00:27:15.504 "base_bdev": "nvme0n1", 00:27:15.504 "thin_provision": true, 00:27:15.504 "num_allocated_clusters": 0, 00:27:15.504 "snapshot": false, 00:27:15.504 "clone": false, 00:27:15.504 "esnap_clone": false 00:27:15.504 } 00:27:15.504 } 00:27:15.504 } 00:27:15.504 ]' 00:27:15.504 04:11:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:27:15.504 04:11:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:27:15.504 04:11:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:27:15.504 04:11:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:27:15.504 04:11:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:27:15.504 04:11:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:27:15.504 04:11:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:27:15.504 04:11:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 060c9d3b-67f6-4437-877a-e741914f1143 
--l2p_dram_limit 10' 00:27:15.504 04:11:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:27:15.504 04:11:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:27:15.504 04:11:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:27:15.504 04:11:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 060c9d3b-67f6-4437-877a-e741914f1143 --l2p_dram_limit 10 -c nvc0n1p0 00:27:15.765 [2024-12-07 04:11:58.246744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.765 [2024-12-07 04:11:58.246796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:15.765 [2024-12-07 04:11:58.246814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:15.765 [2024-12-07 04:11:58.246841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.765 [2024-12-07 04:11:58.246908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.765 [2024-12-07 04:11:58.246920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:15.765 [2024-12-07 04:11:58.246934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:27:15.765 [2024-12-07 04:11:58.246960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.765 [2024-12-07 04:11:58.246991] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:15.765 [2024-12-07 04:11:58.247994] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:15.765 [2024-12-07 04:11:58.248029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.765 [2024-12-07 04:11:58.248040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:15.765 [2024-12-07 04:11:58.248054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.047 ms 00:27:15.765 [2024-12-07 04:11:58.248064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.765 [2024-12-07 04:11:58.248144] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 90ccfb34-21a9-40df-a0bb-b5a05cb6ac2e 00:27:15.765 [2024-12-07 04:11:58.249610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.765 [2024-12-07 04:11:58.249638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:27:15.765 [2024-12-07 04:11:58.249649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:27:15.765 [2024-12-07 04:11:58.249661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.765 [2024-12-07 04:11:58.257547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.765 [2024-12-07 04:11:58.257713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:15.765 [2024-12-07 04:11:58.257853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.843 ms 00:27:15.765 [2024-12-07 04:11:58.257895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.765 [2024-12-07 04:11:58.258050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.765 [2024-12-07 04:11:58.258235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:15.765 [2024-12-07 04:11:58.258339] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:27:15.765 [2024-12-07 04:11:58.258379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.765 [2024-12-07 04:11:58.258462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.765 [2024-12-07 04:11:58.258501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:15.765 [2024-12-07 04:11:58.258535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:15.765 [2024-12-07 04:11:58.258568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.765 [2024-12-07 04:11:58.258615] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:15.765 [2024-12-07 04:11:58.265920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.765 [2024-12-07 04:11:58.265958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:15.765 [2024-12-07 04:11:58.265974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.321 ms 00:27:15.765 [2024-12-07 04:11:58.265984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.765 [2024-12-07 04:11:58.266025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.765 [2024-12-07 04:11:58.266035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:15.765 [2024-12-07 04:11:58.266048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:15.766 [2024-12-07 04:11:58.266057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.766 [2024-12-07 04:11:58.266100] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:27:15.766 [2024-12-07 04:11:58.266224] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:15.766 [2024-12-07 04:11:58.266242] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:15.766 [2024-12-07 04:11:58.266254] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:15.766 [2024-12-07 04:11:58.266269] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:15.766 [2024-12-07 04:11:58.266280] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:15.766 [2024-12-07 04:11:58.266302] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:15.766 [2024-12-07 04:11:58.266328] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:15.766 [2024-12-07 04:11:58.266345] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:15.766 [2024-12-07 04:11:58.266355] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:15.766 [2024-12-07 04:11:58.266368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.766 [2024-12-07 04:11:58.266387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:15.766 [2024-12-07 04:11:58.266401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:27:15.766 [2024-12-07 04:11:58.266411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.766 [2024-12-07 04:11:58.266487] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.766 [2024-12-07 04:11:58.266498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:15.766 [2024-12-07 04:11:58.266510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:27:15.766 [2024-12-07 04:11:58.266519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.766 [2024-12-07 04:11:58.266614] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:15.766 [2024-12-07 04:11:58.266627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:15.766 [2024-12-07 04:11:58.266640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:15.766 [2024-12-07 04:11:58.266650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:15.766 [2024-12-07 04:11:58.266663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:15.766 [2024-12-07 04:11:58.266672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:15.766 [2024-12-07 04:11:58.266684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:15.766 [2024-12-07 04:11:58.266693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:15.766 [2024-12-07 04:11:58.266705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:15.766 [2024-12-07 04:11:58.266714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:15.766 [2024-12-07 04:11:58.266727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:15.766 [2024-12-07 04:11:58.266737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:15.766 [2024-12-07 04:11:58.266748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:15.766 [2024-12-07 04:11:58.266759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:15.766 [2024-12-07 04:11:58.266770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:15.766 [2024-12-07 04:11:58.266779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:15.766 [2024-12-07 04:11:58.266793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:15.766 [2024-12-07 04:11:58.266802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:15.766 [2024-12-07 04:11:58.266813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:15.766 [2024-12-07 04:11:58.266822] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:15.766 [2024-12-07 04:11:58.266834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:15.766 [2024-12-07 04:11:58.266843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:15.766 [2024-12-07 04:11:58.266854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:15.766 [2024-12-07 04:11:58.266863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:15.766 [2024-12-07 04:11:58.266874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:15.766 [2024-12-07 04:11:58.266883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:15.766 [2024-12-07 04:11:58.266894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:15.766 [2024-12-07 04:11:58.266903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:15.766 [2024-12-07 04:11:58.266915] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:15.766 [2024-12-07 04:11:58.266924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:15.766 [2024-12-07 04:11:58.266935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:15.766 [2024-12-07 04:11:58.267129] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:15.766 [2024-12-07 04:11:58.267183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:15.766 [2024-12-07 04:11:58.267216] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:15.766 [2024-12-07 04:11:58.267249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:15.766 [2024-12-07 04:11:58.267279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:15.766 [2024-12-07 04:11:58.267313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:15.766 [2024-12-07 04:11:58.267343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:15.766 [2024-12-07 04:11:58.267374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:15.766 [2024-12-07 04:11:58.267404] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:15.766 [2024-12-07 04:11:58.267498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:15.766 [2024-12-07 04:11:58.267534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:15.766 [2024-12-07 04:11:58.267566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:15.766 [2024-12-07 04:11:58.267596] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:15.766 [2024-12-07 04:11:58.267629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:15.766 [2024-12-07 04:11:58.267660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:15.766 [2024-12-07 04:11:58.267693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:15.766 [2024-12-07 04:11:58.267772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:15.766 [2024-12-07 04:11:58.267813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:15.766 [2024-12-07 04:11:58.267843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:15.766 [2024-12-07 04:11:58.267876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:15.766 [2024-12-07 04:11:58.267905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:15.766 [2024-12-07 04:11:58.267955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:15.766 [2024-12-07 04:11:58.267992] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:15.766 [2024-12-07 04:11:58.268152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:15.766 [2024-12-07 04:11:58.268202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:15.766 [2024-12-07 04:11:58.268253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:15.766 [2024-12-07 04:11:58.268408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:15.766 [2024-12-07 04:11:58.268467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:15.766 [2024-12-07 04:11:58.268480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:15.766 [2024-12-07 04:11:58.268493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:15.766 [2024-12-07 04:11:58.268504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:15.766 [2024-12-07 04:11:58.268519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:15.766 [2024-12-07 04:11:58.268530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:15.766 [2024-12-07 04:11:58.268546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:15.766 [2024-12-07 04:11:58.268556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:15.766 [2024-12-07 04:11:58.268569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:15.766 [2024-12-07 04:11:58.268580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:15.766 [2024-12-07 04:11:58.268593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:15.766 [2024-12-07 04:11:58.268603] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:15.766 [2024-12-07 04:11:58.268617] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:15.766 [2024-12-07 04:11:58.268628] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:15.766 [2024-12-07 04:11:58.268642] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:15.766 [2024-12-07 04:11:58.268653] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:15.766 [2024-12-07 04:11:58.268666] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:15.766 [2024-12-07 04:11:58.268678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:15.766 [2024-12-07 04:11:58.268692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:15.766 [2024-12-07 04:11:58.268705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.124 ms 00:27:15.766 [2024-12-07 04:11:58.268717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:15.767 [2024-12-07 04:11:58.268777] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:27:15.767 [2024-12-07 04:11:58.268796] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:27:19.980 [2024-12-07 04:12:01.906479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.980 [2024-12-07 04:12:01.906746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:27:19.980 [2024-12-07 04:12:01.906773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3643.608 ms 00:27:19.980 [2024-12-07 04:12:01.906787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.980 [2024-12-07 04:12:01.946574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.980 [2024-12-07 04:12:01.946627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:19.980 [2024-12-07 04:12:01.946644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.508 ms 00:27:19.980 [2024-12-07 04:12:01.946657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.980 [2024-12-07 04:12:01.946784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.980 [2024-12-07 04:12:01.946801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:19.980 [2024-12-07 04:12:01.946812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:27:19.980 [2024-12-07 04:12:01.946832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.980 [2024-12-07 04:12:01.994208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.980 [2024-12-07 04:12:01.994252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:19.980 [2024-12-07 04:12:01.994268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.394 ms 00:27:19.980 [2024-12-07 04:12:01.994281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.980 [2024-12-07 04:12:01.994351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.980 [2024-12-07 04:12:01.994371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:19.980 [2024-12-07 04:12:01.994382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:19.980 [2024-12-07 04:12:01.994407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.980 [2024-12-07 04:12:01.994885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.980 [2024-12-07 04:12:01.994903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:19.980 [2024-12-07 04:12:01.994914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:27:19.980 [2024-12-07 04:12:01.994927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.981 [2024-12-07 04:12:01.995047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.981 [2024-12-07 04:12:01.995063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:19.981 [2024-12-07 04:12:01.995077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:27:19.981 [2024-12-07 04:12:01.995092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.981 [2024-12-07 04:12:02.016068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.981 [2024-12-07 04:12:02.016107] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:19.981 [2024-12-07 04:12:02.016121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.988 ms 00:27:19.981 [2024-12-07 04:12:02.016150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.981 [2024-12-07 04:12:02.054619] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:19.981 [2024-12-07 04:12:02.058634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.981 [2024-12-07 04:12:02.058827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:19.981 [2024-12-07 04:12:02.058864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.468 ms 00:27:19.981 [2024-12-07 04:12:02.058879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.981 [2024-12-07 04:12:02.153766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.981 [2024-12-07 04:12:02.153827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:27:19.981 [2024-12-07 04:12:02.153863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.988 ms 00:27:19.981 [2024-12-07 04:12:02.153875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.981 [2024-12-07 04:12:02.154069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.981 [2024-12-07 04:12:02.154087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:19.981 [2024-12-07 04:12:02.154120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.146 ms 00:27:19.981 [2024-12-07 04:12:02.154131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.981 [2024-12-07 04:12:02.191104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.981 [2024-12-07 04:12:02.191144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:27:19.981 [2024-12-07 04:12:02.191163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.978 ms 00:27:19.981 [2024-12-07 04:12:02.191173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.981 [2024-12-07 04:12:02.230583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.981 [2024-12-07 04:12:02.230750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:27:19.981 [2024-12-07 04:12:02.230778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.423 ms 00:27:19.981 [2024-12-07 04:12:02.230789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.981 [2024-12-07 04:12:02.231532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.981 [2024-12-07 04:12:02.231553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:19.981 [2024-12-07 04:12:02.231568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:27:19.981 [2024-12-07 04:12:02.231581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.981 [2024-12-07 04:12:02.332098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.981 [2024-12-07 04:12:02.332156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:27:19.981 [2024-12-07 04:12:02.332178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.618 ms 00:27:19.981 [2024-12-07 04:12:02.332190] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.981 [2024-12-07 04:12:02.370274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.981 [2024-12-07 04:12:02.370318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:27:19.981 [2024-12-07 04:12:02.370335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.060 ms 00:27:19.981 [2024-12-07 04:12:02.370361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.981 [2024-12-07 04:12:02.406006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.981 [2024-12-07 04:12:02.406041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:27:19.981 [2024-12-07 04:12:02.406058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.655 ms 00:27:19.981 [2024-12-07 04:12:02.406067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.981 [2024-12-07 04:12:02.442373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.981 [2024-12-07 04:12:02.442410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:19.981 [2024-12-07 04:12:02.442426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.318 ms 00:27:19.981 [2024-12-07 04:12:02.442436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.981 [2024-12-07 04:12:02.442485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.981 [2024-12-07 04:12:02.442497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:19.981 [2024-12-07 04:12:02.442513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:19.981 [2024-12-07 04:12:02.442523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.981 [2024-12-07 04:12:02.442621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:19.981 [2024-12-07 04:12:02.442636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:19.981 [2024-12-07 04:12:02.442665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:27:19.981 [2024-12-07 04:12:02.442675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:19.981 [2024-12-07 04:12:02.443705] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4203.331 ms, result 0 00:27:19.981 { 00:27:19.981 "name": "ftl0", 00:27:19.981 "uuid": "90ccfb34-21a9-40df-a0bb-b5a05cb6ac2e" 00:27:19.981 } 00:27:19.981 04:12:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:27:19.981 04:12:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:27:19.981 04:12:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:27:19.981 04:12:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:27:19.981 04:12:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:27:20.241 /dev/nbd0 00:27:20.241 04:12:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:27:20.241 04:12:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:27:20.241 04:12:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:27:20.241 04:12:02 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:27:20.241 04:12:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:27:20.241 04:12:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:27:20.241 04:12:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:27:20.241 04:12:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:27:20.241 04:12:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:27:20.241 04:12:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:27:20.241 1+0 records in 00:27:20.241 1+0 records out 00:27:20.241 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000344151 s, 11.9 MB/s 00:27:20.241 04:12:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:27:20.241 04:12:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:27:20.241 04:12:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:27:20.241 04:12:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:27:20.241 04:12:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:27:20.241 04:12:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:27:20.501 [2024-12-07 04:12:03.054132] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:27:20.502 [2024-12-07 04:12:03.054244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81266 ] 00:27:20.782 [2024-12-07 04:12:03.240374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.782 [2024-12-07 04:12:03.351788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.163  [2024-12-07T04:12:05.834Z] Copying: 208/1024 [MB] (208 MBps) [2024-12-07T04:12:06.772Z] Copying: 413/1024 [MB] (205 MBps) [2024-12-07T04:12:07.708Z] Copying: 611/1024 [MB] (197 MBps) [2024-12-07T04:12:09.088Z] Copying: 803/1024 [MB] (191 MBps) [2024-12-07T04:12:09.088Z] Copying: 987/1024 [MB] (183 MBps) [2024-12-07T04:12:10.468Z] Copying: 1024/1024 [MB] (average 197 MBps) 00:27:27.732 00:27:27.732 04:12:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:29.112 04:12:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:27:29.112 [2024-12-07 04:12:11.827954] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
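The xtrace above is the harness attaching the freshly created ftl0 bdev to the kernel NBD driver and waiting until /dev/nbd0 is actually usable before trusting it with data: modprobe nbd, nbd_start_disk, then waitfornbd polling /proc/partitions and proving a 4 KiB O_DIRECT read round-trips. A minimal sketch of the same readiness check, assuming an illustrative scratch path; the loop bound of 20 comes from the trace, while the sleep interval is an assumption (the trace does not show a back-off):

    modprobe nbd
    ./scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
    # Wait until the kernel has registered nbd0 (the same test waitfornbd runs).
    for ((i = 1; i <= 20; i++)); do
        grep -q -w nbd0 /proc/partitions && break
        sleep 0.1   # assumption: retry interval not visible in the trace
    done
    # Prove a direct-I/O read works end-to-end before writing real data.
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    [ "$(stat -c %s /tmp/nbdtest)" != 0 ] && rm -f /tmp/nbdtest

The spdk_dd run that follows streams 262144 blocks of 4096 bytes (1 GiB) of /dev/urandom into a test file; the md5sum taken right after is presumably the reference checksum the dirty-shutdown test verifies against once the device has been through recovery.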
00:27:29.112 [2024-12-07 04:12:11.828084] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81360 ] 00:27:29.374 [2024-12-07 04:12:12.013100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.705 [2024-12-07 04:12:12.135718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:31.104  [2024-12-07T04:12:14.777Z] Copying: 16/1024 [MB] (16 MBps) [2024-12-07T04:12:15.713Z] Copying: 33/1024 [MB] (16 MBps) [2024-12-07T04:12:16.649Z] Copying: 49/1024 [MB] (16 MBps) [2024-12-07T04:12:17.587Z] Copying: 67/1024 [MB] (17 MBps) [2024-12-07T04:12:18.525Z] Copying: 84/1024 [MB] (17 MBps) [2024-12-07T04:12:19.906Z] Copying: 101/1024 [MB] (17 MBps) [2024-12-07T04:12:20.845Z] Copying: 118/1024 [MB] (17 MBps) [2024-12-07T04:12:21.784Z] Copying: 136/1024 [MB] (17 MBps) [2024-12-07T04:12:22.722Z] Copying: 153/1024 [MB] (16 MBps) [2024-12-07T04:12:23.661Z] Copying: 170/1024 [MB] (17 MBps) [2024-12-07T04:12:24.598Z] Copying: 188/1024 [MB] (17 MBps) [2024-12-07T04:12:25.536Z] Copying: 205/1024 [MB] (17 MBps) [2024-12-07T04:12:26.910Z] Copying: 223/1024 [MB] (17 MBps) [2024-12-07T04:12:27.479Z] Copying: 239/1024 [MB] (16 MBps) [2024-12-07T04:12:28.854Z] Copying: 256/1024 [MB] (17 MBps) [2024-12-07T04:12:29.793Z] Copying: 273/1024 [MB] (17 MBps) [2024-12-07T04:12:30.733Z] Copying: 290/1024 [MB] (17 MBps) [2024-12-07T04:12:31.674Z] Copying: 307/1024 [MB] (17 MBps) [2024-12-07T04:12:32.613Z] Copying: 325/1024 [MB] (17 MBps) [2024-12-07T04:12:33.549Z] Copying: 342/1024 [MB] (17 MBps) [2024-12-07T04:12:34.485Z] Copying: 360/1024 [MB] (17 MBps) [2024-12-07T04:12:35.862Z] Copying: 377/1024 [MB] (17 MBps) [2024-12-07T04:12:36.800Z] Copying: 394/1024 [MB] (17 MBps) [2024-12-07T04:12:37.737Z] Copying: 411/1024 [MB] (16 MBps) [2024-12-07T04:12:38.693Z] Copying: 428/1024 [MB] (17 MBps) [2024-12-07T04:12:39.633Z] Copying: 445/1024 [MB] (17 MBps) [2024-12-07T04:12:40.572Z] Copying: 462/1024 [MB] (17 MBps) [2024-12-07T04:12:41.635Z] Copying: 480/1024 [MB] (17 MBps) [2024-12-07T04:12:42.572Z] Copying: 497/1024 [MB] (17 MBps) [2024-12-07T04:12:43.506Z] Copying: 514/1024 [MB] (17 MBps) [2024-12-07T04:12:44.471Z] Copying: 532/1024 [MB] (17 MBps) [2024-12-07T04:12:45.847Z] Copying: 549/1024 [MB] (16 MBps) [2024-12-07T04:12:46.784Z] Copying: 566/1024 [MB] (17 MBps) [2024-12-07T04:12:47.725Z] Copying: 584/1024 [MB] (17 MBps) [2024-12-07T04:12:48.663Z] Copying: 601/1024 [MB] (17 MBps) [2024-12-07T04:12:49.601Z] Copying: 617/1024 [MB] (16 MBps) [2024-12-07T04:12:50.541Z] Copying: 634/1024 [MB] (16 MBps) [2024-12-07T04:12:51.480Z] Copying: 650/1024 [MB] (16 MBps) [2024-12-07T04:12:52.861Z] Copying: 667/1024 [MB] (17 MBps) [2024-12-07T04:12:53.432Z] Copying: 684/1024 [MB] (16 MBps) [2024-12-07T04:12:54.813Z] Copying: 701/1024 [MB] (17 MBps) [2024-12-07T04:12:55.752Z] Copying: 718/1024 [MB] (16 MBps) [2024-12-07T04:12:56.689Z] Copying: 735/1024 [MB] (16 MBps) [2024-12-07T04:12:57.625Z] Copying: 751/1024 [MB] (16 MBps) [2024-12-07T04:12:58.565Z] Copying: 768/1024 [MB] (17 MBps) [2024-12-07T04:12:59.504Z] Copying: 785/1024 [MB] (17 MBps) [2024-12-07T04:13:00.445Z] Copying: 802/1024 [MB] (17 MBps) [2024-12-07T04:13:01.823Z] Copying: 820/1024 [MB] (17 MBps) [2024-12-07T04:13:02.760Z] Copying: 837/1024 [MB] (17 MBps) [2024-12-07T04:13:03.697Z] Copying: 854/1024 [MB] (17 MBps) 
[2024-12-07T04:13:04.634Z] Copying: 872/1024 [MB] (17 MBps) [2024-12-07T04:13:05.569Z] Copying: 889/1024 [MB] (16 MBps) [2024-12-07T04:13:06.503Z] Copying: 905/1024 [MB] (16 MBps) [2024-12-07T04:13:07.444Z] Copying: 922/1024 [MB] (16 MBps) [2024-12-07T04:13:08.824Z] Copying: 939/1024 [MB] (17 MBps) [2024-12-07T04:13:09.763Z] Copying: 956/1024 [MB] (17 MBps) [2024-12-07T04:13:10.730Z] Copying: 973/1024 [MB] (17 MBps) [2024-12-07T04:13:11.668Z] Copying: 990/1024 [MB] (16 MBps) [2024-12-07T04:13:12.606Z] Copying: 1007/1024 [MB] (16 MBps) [2024-12-07T04:13:13.986Z] Copying: 1024/1024 [MB] (average 17 MBps) 00:28:31.250 00:28:31.250 04:13:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:28:31.250 04:13:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:28:31.250 04:13:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:28:31.509 [2024-12-07 04:13:14.024375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.509 [2024-12-07 04:13:14.024431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:31.509 [2024-12-07 04:13:14.024464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:31.509 [2024-12-07 04:13:14.024478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.510 [2024-12-07 04:13:14.024506] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:31.510 [2024-12-07 04:13:14.028797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.510 [2024-12-07 04:13:14.028834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:31.510 [2024-12-07 04:13:14.028850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.273 ms 00:28:31.510 [2024-12-07 04:13:14.028861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.510 [2024-12-07 04:13:14.030980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.510 [2024-12-07 04:13:14.031020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:31.510 [2024-12-07 04:13:14.031036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.082 ms 00:28:31.510 [2024-12-07 04:13:14.031052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.510 [2024-12-07 04:13:14.049273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.510 [2024-12-07 04:13:14.049311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:31.510 [2024-12-07 04:13:14.049326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.221 ms 00:28:31.510 [2024-12-07 04:13:14.049353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.510 [2024-12-07 04:13:14.054365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.510 [2024-12-07 04:13:14.054399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:31.510 [2024-12-07 04:13:14.054413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.977 ms 00:28:31.510 [2024-12-07 04:13:14.054423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.510 [2024-12-07 04:13:14.089537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.510 [2024-12-07 04:13:14.089576] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:31.510 [2024-12-07 04:13:14.089593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.088 ms 00:28:31.510 [2024-12-07 04:13:14.089602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.510 [2024-12-07 04:13:14.110771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.510 [2024-12-07 04:13:14.110808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:31.510 [2024-12-07 04:13:14.110828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.156 ms 00:28:31.510 [2024-12-07 04:13:14.110844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.510 [2024-12-07 04:13:14.111029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.510 [2024-12-07 04:13:14.111044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:31.510 [2024-12-07 04:13:14.111059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:28:31.510 [2024-12-07 04:13:14.111069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.510 [2024-12-07 04:13:14.146165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.510 [2024-12-07 04:13:14.146200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:31.510 [2024-12-07 04:13:14.146215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.126 ms 00:28:31.510 [2024-12-07 04:13:14.146224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.510 [2024-12-07 04:13:14.180356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.510 [2024-12-07 04:13:14.180497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:31.510 [2024-12-07 04:13:14.180539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.141 ms 00:28:31.510 [2024-12-07 04:13:14.180549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.510 [2024-12-07 04:13:14.213829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.510 [2024-12-07 04:13:14.213864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:31.510 [2024-12-07 04:13:14.213880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.287 ms 00:28:31.510 [2024-12-07 04:13:14.213905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.770 [2024-12-07 04:13:14.248691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.770 [2024-12-07 04:13:14.248737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:31.770 [2024-12-07 04:13:14.248753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.729 ms 00:28:31.770 [2024-12-07 04:13:14.248763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.770 [2024-12-07 04:13:14.248805] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:31.770 [2024-12-07 04:13:14.248827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.248842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.248853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 
wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.248866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.248876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.248890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.248900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.248916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.248938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.248952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.248962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.248975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.248985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.248998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.249009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.249022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.249032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.249045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.249056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.249069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.249079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.249093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.249103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.249119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.249129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.249154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.249166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.249178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
28: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.249189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.249203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.249213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.249230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.249241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.249256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.249266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.249280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.249290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.249303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.249313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.249328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:31.770 [2024-12-07 04:13:14.249339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249491] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249797] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.249996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.250008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.250018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.250032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.250042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.250054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:31.771 [2024-12-07 04:13:14.250070] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:31.771 [2024-12-07 04:13:14.250081] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 90ccfb34-21a9-40df-a0bb-b5a05cb6ac2e 00:28:31.771 [2024-12-07 04:13:14.250092] ftl_debug.c: 
213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:28:31.771 [2024-12-07 04:13:14.250106] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:28:31.771 [2024-12-07 04:13:14.250118] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:28:31.771 [2024-12-07 04:13:14.250130] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:28:31.771 [2024-12-07 04:13:14.250139] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:31.771 [2024-12-07 04:13:14.250151] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:31.771 [2024-12-07 04:13:14.250160] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:31.771 [2024-12-07 04:13:14.250171] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:31.771 [2024-12-07 04:13:14.250179] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:31.771 [2024-12-07 04:13:14.250191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.771 [2024-12-07 04:13:14.250200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:31.771 [2024-12-07 04:13:14.250212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.389 ms 00:28:31.771 [2024-12-07 04:13:14.250223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.771 [2024-12-07 04:13:14.269547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.771 [2024-12-07 04:13:14.269582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:31.772 [2024-12-07 04:13:14.269597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.303 ms 00:28:31.772 [2024-12-07 04:13:14.269607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.772 [2024-12-07 04:13:14.270110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:31.772 [2024-12-07 04:13:14.270122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:31.772 [2024-12-07 04:13:14.270135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.476 ms 00:28:31.772 [2024-12-07 04:13:14.270144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.772 [2024-12-07 04:13:14.332199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:31.772 [2024-12-07 04:13:14.332233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:31.772 [2024-12-07 04:13:14.332249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:31.772 [2024-12-07 04:13:14.332275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.772 [2024-12-07 04:13:14.332328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:31.772 [2024-12-07 04:13:14.332339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:31.772 [2024-12-07 04:13:14.332352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:31.772 [2024-12-07 04:13:14.332361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.772 [2024-12-07 04:13:14.332455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:31.772 [2024-12-07 04:13:14.332471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:31.772 [2024-12-07 04:13:14.332485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:28:31.772 [2024-12-07 04:13:14.332494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.772 [2024-12-07 04:13:14.332525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:31.772 [2024-12-07 04:13:14.332535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:31.772 [2024-12-07 04:13:14.332548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:31.772 [2024-12-07 04:13:14.332557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:31.772 [2024-12-07 04:13:14.447715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:31.772 [2024-12-07 04:13:14.447762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:31.772 [2024-12-07 04:13:14.447779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:31.772 [2024-12-07 04:13:14.447789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.031 [2024-12-07 04:13:14.544074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.031 [2024-12-07 04:13:14.544248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:32.031 [2024-12-07 04:13:14.544276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.031 [2024-12-07 04:13:14.544288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.031 [2024-12-07 04:13:14.544401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.031 [2024-12-07 04:13:14.544419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:32.031 [2024-12-07 04:13:14.544440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.031 [2024-12-07 04:13:14.544450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.031 [2024-12-07 04:13:14.544507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.031 [2024-12-07 04:13:14.544519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:32.031 [2024-12-07 04:13:14.544532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.031 [2024-12-07 04:13:14.544542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.031 [2024-12-07 04:13:14.544653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.031 [2024-12-07 04:13:14.544666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:32.031 [2024-12-07 04:13:14.544679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.031 [2024-12-07 04:13:14.544692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.031 [2024-12-07 04:13:14.544733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.031 [2024-12-07 04:13:14.544745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:32.031 [2024-12-07 04:13:14.544758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.031 [2024-12-07 04:13:14.544768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.031 [2024-12-07 04:13:14.544816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.031 [2024-12-07 04:13:14.544828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:32.031 [2024-12-07 
04:13:14.544841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.031 [2024-12-07 04:13:14.544853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.031 [2024-12-07 04:13:14.544903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:32.031 [2024-12-07 04:13:14.544915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:32.031 [2024-12-07 04:13:14.544946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:32.031 [2024-12-07 04:13:14.544957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:32.031 [2024-12-07 04:13:14.545093] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 521.530 ms, result 0 00:28:32.031 true 00:28:32.031 04:13:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81117 00:28:32.031 04:13:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81117 00:28:32.031 04:13:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:28:32.031 [2024-12-07 04:13:14.672946] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:28:32.031 [2024-12-07 04:13:14.673076] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82003 ] 00:28:32.290 [2024-12-07 04:13:14.856830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.290 [2024-12-07 04:13:14.971898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.674  [2024-12-07T04:13:17.347Z] Copying: 212/1024 [MB] (212 MBps) [2024-12-07T04:13:18.289Z] Copying: 426/1024 [MB] (213 MBps) [2024-12-07T04:13:19.670Z] Copying: 642/1024 [MB] (216 MBps) [2024-12-07T04:13:20.239Z] Copying: 855/1024 [MB] (212 MBps) [2024-12-07T04:13:21.178Z] Copying: 1024/1024 [MB] (average 213 MBps) 00:28:38.442 00:28:38.442 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81117 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:28:38.702 04:13:21 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:38.702 [2024-12-07 04:13:21.266493] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
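Everything from the kill -9 onward is the dirty shutdown proper. dirty_shutdown.sh@83 SIGKILLs the spdk_tgt that owned the stack (pid 81117 in this run, hence the shell's "81117 Killed" report above): ftl0 itself was unloaded cleanly just before, but the rest of the stack, including the blobstore backing the nvc0n1 cache judging by the bs_recover notices that follow, goes down hard. dirty_shutdown.sh@88 then drives the bdev from a brand-new spdk_dd process that rebuilds everything from the JSON captured earlier with save_subsystem_config. A minimal sketch of that sequence, with $svcpid standing in for the target's pid and relative paths for the absolute ones in this log:

    # Kill the target hard; ftl0 was already unloaded, but the process
    # dies without any further cleanup of the devices underneath it.
    kill -9 "$svcpid"
    rm -f "/dev/shm/spdk_tgt_trace.pid$svcpid"
    # Write the second 1 GiB straight to the ftl0 bdev; --seek=262144 skips
    # the region already populated through /dev/nbd0, and --json recreates
    # the whole bdev stack inside this spdk_dd process.
    ./build/bin/spdk_dd --if=test/ftl/testfile2 --ob=ftl0 \
        --count=262144 --seek=262144 \
        --json=test/ftl/config/ftl.json

The EAL banner that follows is therefore a fresh SPDK application whose FTL load has to recover state rather than trust it.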
00:28:38.702 [2024-12-07 04:13:21.266619] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82074 ] 00:28:38.970 [2024-12-07 04:13:21.444179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:38.970 [2024-12-07 04:13:21.550595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.229 [2024-12-07 04:13:21.899716] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:39.229 [2024-12-07 04:13:21.899968] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:39.488 [2024-12-07 04:13:21.965672] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:28:39.488 [2024-12-07 04:13:21.966012] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:28:39.488 [2024-12-07 04:13:21.966228] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:28:39.747 [2024-12-07 04:13:22.279097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.747 [2024-12-07 04:13:22.279144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:39.748 [2024-12-07 04:13:22.279159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:39.748 [2024-12-07 04:13:22.279174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.748 [2024-12-07 04:13:22.279221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.748 [2024-12-07 04:13:22.279233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:39.748 [2024-12-07 04:13:22.279244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:28:39.748 [2024-12-07 04:13:22.279253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.748 [2024-12-07 04:13:22.279274] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:39.748 [2024-12-07 04:13:22.280266] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:39.748 [2024-12-07 04:13:22.280288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.748 [2024-12-07 04:13:22.280299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:39.748 [2024-12-07 04:13:22.280310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.020 ms 00:28:39.748 [2024-12-07 04:13:22.280320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.748 [2024-12-07 04:13:22.281768] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:39.748 [2024-12-07 04:13:22.300170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.748 [2024-12-07 04:13:22.300344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:39.748 [2024-12-07 04:13:22.300367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.433 ms 00:28:39.748 [2024-12-07 04:13:22.300378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.748 [2024-12-07 04:13:22.300466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.748 [2024-12-07 04:13:22.300480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:28:39.748 [2024-12-07 04:13:22.300491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:28:39.748 [2024-12-07 04:13:22.300501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.748 [2024-12-07 04:13:22.307445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.748 [2024-12-07 04:13:22.307611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:39.748 [2024-12-07 04:13:22.307631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.882 ms 00:28:39.748 [2024-12-07 04:13:22.307642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.748 [2024-12-07 04:13:22.307726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.748 [2024-12-07 04:13:22.307739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:39.748 [2024-12-07 04:13:22.307750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:28:39.748 [2024-12-07 04:13:22.307760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.748 [2024-12-07 04:13:22.307805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.748 [2024-12-07 04:13:22.307817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:39.748 [2024-12-07 04:13:22.307828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:39.748 [2024-12-07 04:13:22.307837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.748 [2024-12-07 04:13:22.307861] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:39.748 [2024-12-07 04:13:22.312627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.748 [2024-12-07 04:13:22.312657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:39.748 [2024-12-07 04:13:22.312669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.778 ms 00:28:39.748 [2024-12-07 04:13:22.312678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.748 [2024-12-07 04:13:22.312727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.748 [2024-12-07 04:13:22.312738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:39.748 [2024-12-07 04:13:22.312749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:39.748 [2024-12-07 04:13:22.312759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.748 [2024-12-07 04:13:22.312813] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:39.748 [2024-12-07 04:13:22.312838] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:39.748 [2024-12-07 04:13:22.312872] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:39.748 [2024-12-07 04:13:22.312890] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:39.748 [2024-12-07 04:13:22.312994] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:39.748 [2024-12-07 04:13:22.313008] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:39.748 
[2024-12-07 04:13:22.313021] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:39.748 [2024-12-07 04:13:22.313037] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:39.748 [2024-12-07 04:13:22.313049] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:39.748 [2024-12-07 04:13:22.313060] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:39.748 [2024-12-07 04:13:22.313070] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:39.748 [2024-12-07 04:13:22.313095] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:39.748 [2024-12-07 04:13:22.313105] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:39.748 [2024-12-07 04:13:22.313116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.748 [2024-12-07 04:13:22.313126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:39.748 [2024-12-07 04:13:22.313137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:28:39.748 [2024-12-07 04:13:22.313147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.748 [2024-12-07 04:13:22.313218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.748 [2024-12-07 04:13:22.313233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:39.748 [2024-12-07 04:13:22.313243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:28:39.748 [2024-12-07 04:13:22.313253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.748 [2024-12-07 04:13:22.313344] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:39.748 [2024-12-07 04:13:22.313357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:39.748 [2024-12-07 04:13:22.313368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:39.748 [2024-12-07 04:13:22.313378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:39.748 [2024-12-07 04:13:22.313404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:39.748 [2024-12-07 04:13:22.313414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:39.748 [2024-12-07 04:13:22.313423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:39.748 [2024-12-07 04:13:22.313433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:39.748 [2024-12-07 04:13:22.313442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:39.748 [2024-12-07 04:13:22.313460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:39.748 [2024-12-07 04:13:22.313470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:39.748 [2024-12-07 04:13:22.313480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:39.748 [2024-12-07 04:13:22.313489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:39.748 [2024-12-07 04:13:22.313498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:39.748 [2024-12-07 04:13:22.313508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:39.748 [2024-12-07 04:13:22.313517] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:39.748 [2024-12-07 04:13:22.313527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:39.748 [2024-12-07 04:13:22.313536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:39.748 [2024-12-07 04:13:22.313546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:39.748 [2024-12-07 04:13:22.313555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:39.748 [2024-12-07 04:13:22.313565] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:39.748 [2024-12-07 04:13:22.313574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:39.748 [2024-12-07 04:13:22.313583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:39.748 [2024-12-07 04:13:22.313592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:39.748 [2024-12-07 04:13:22.313601] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:39.748 [2024-12-07 04:13:22.313610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:39.748 [2024-12-07 04:13:22.313619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:39.748 [2024-12-07 04:13:22.313627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:39.748 [2024-12-07 04:13:22.313637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:39.748 [2024-12-07 04:13:22.313646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:39.748 [2024-12-07 04:13:22.313655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:39.748 [2024-12-07 04:13:22.313664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:39.748 [2024-12-07 04:13:22.313673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:39.748 [2024-12-07 04:13:22.313682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:39.748 [2024-12-07 04:13:22.313691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:39.748 [2024-12-07 04:13:22.313699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:39.748 [2024-12-07 04:13:22.313708] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:39.748 [2024-12-07 04:13:22.313717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:39.748 [2024-12-07 04:13:22.313726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:39.748 [2024-12-07 04:13:22.313735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:39.748 [2024-12-07 04:13:22.313744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:39.748 [2024-12-07 04:13:22.313753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:39.749 [2024-12-07 04:13:22.313763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:39.749 [2024-12-07 04:13:22.313772] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:39.749 [2024-12-07 04:13:22.313782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:39.749 [2024-12-07 04:13:22.313794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:39.749 [2024-12-07 04:13:22.313804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:39.749 [2024-12-07 
04:13:22.313814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:39.749 [2024-12-07 04:13:22.313823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:39.749 [2024-12-07 04:13:22.313833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:39.749 [2024-12-07 04:13:22.313842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:39.749 [2024-12-07 04:13:22.313851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:39.749 [2024-12-07 04:13:22.313860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:39.749 [2024-12-07 04:13:22.313871] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:39.749 [2024-12-07 04:13:22.313883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:39.749 [2024-12-07 04:13:22.313895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:39.749 [2024-12-07 04:13:22.313905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:39.749 [2024-12-07 04:13:22.313916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:39.749 [2024-12-07 04:13:22.313926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:39.749 [2024-12-07 04:13:22.313936] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:39.749 [2024-12-07 04:13:22.313959] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:39.749 [2024-12-07 04:13:22.313970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:39.749 [2024-12-07 04:13:22.313980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:39.749 [2024-12-07 04:13:22.313990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:39.749 [2024-12-07 04:13:22.314001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:39.749 [2024-12-07 04:13:22.314011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:39.749 [2024-12-07 04:13:22.314021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:39.749 [2024-12-07 04:13:22.314031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:39.749 [2024-12-07 04:13:22.314042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:39.749 [2024-12-07 04:13:22.314051] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:28:39.749 [2024-12-07 04:13:22.314064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:39.749 [2024-12-07 04:13:22.314075] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:39.749 [2024-12-07 04:13:22.314085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:39.749 [2024-12-07 04:13:22.314095] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:39.749 [2024-12-07 04:13:22.314106] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:39.749 [2024-12-07 04:13:22.314118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.749 [2024-12-07 04:13:22.314128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:39.749 [2024-12-07 04:13:22.314138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.828 ms 00:28:39.749 [2024-12-07 04:13:22.314147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.749 [2024-12-07 04:13:22.353232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.749 [2024-12-07 04:13:22.353278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:39.749 [2024-12-07 04:13:22.353297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.102 ms 00:28:39.749 [2024-12-07 04:13:22.353313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.749 [2024-12-07 04:13:22.353403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.749 [2024-12-07 04:13:22.353420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:39.749 [2024-12-07 04:13:22.353436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:28:39.749 [2024-12-07 04:13:22.353451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.749 [2024-12-07 04:13:22.413825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.749 [2024-12-07 04:13:22.413872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:39.749 [2024-12-07 04:13:22.413891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.399 ms 00:28:39.749 [2024-12-07 04:13:22.413902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.749 [2024-12-07 04:13:22.413963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.749 [2024-12-07 04:13:22.413974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:39.749 [2024-12-07 04:13:22.413985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:39.749 [2024-12-07 04:13:22.413994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.749 [2024-12-07 04:13:22.414563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.749 [2024-12-07 04:13:22.414586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:39.749 [2024-12-07 04:13:22.414597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:28:39.749 [2024-12-07 04:13:22.414614] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.749 [2024-12-07 04:13:22.414737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.749 [2024-12-07 04:13:22.414751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:39.749 [2024-12-07 04:13:22.414762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:28:39.749 [2024-12-07 04:13:22.414772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.749 [2024-12-07 04:13:22.433994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.749 [2024-12-07 04:13:22.434034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:39.749 [2024-12-07 04:13:22.434048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.232 ms 00:28:39.749 [2024-12-07 04:13:22.434075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:39.749 [2024-12-07 04:13:22.453438] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:39.749 [2024-12-07 04:13:22.453479] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:39.749 [2024-12-07 04:13:22.453495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:39.749 [2024-12-07 04:13:22.453506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:39.749 [2024-12-07 04:13:22.453518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.330 ms 00:28:39.749 [2024-12-07 04:13:22.453528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.009 [2024-12-07 04:13:22.483004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.009 [2024-12-07 04:13:22.483046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:40.009 [2024-12-07 04:13:22.483060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.477 ms 00:28:40.009 [2024-12-07 04:13:22.483071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.009 [2024-12-07 04:13:22.501719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.009 [2024-12-07 04:13:22.501977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:40.009 [2024-12-07 04:13:22.502016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.617 ms 00:28:40.009 [2024-12-07 04:13:22.502036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.009 [2024-12-07 04:13:22.520300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.009 [2024-12-07 04:13:22.520337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:40.009 [2024-12-07 04:13:22.520350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.232 ms 00:28:40.009 [2024-12-07 04:13:22.520375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.009 [2024-12-07 04:13:22.521123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.009 [2024-12-07 04:13:22.521155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:40.009 [2024-12-07 04:13:22.521167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.635 ms 00:28:40.009 [2024-12-07 04:13:22.521177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
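The startup trace above is a full restore path: after the superblock is loaded and validated, FTL rebuilds its runtime state from the media (NV cache metadata, valid map, band info, trim metadata, and next the P2L checkpoints and L2P). The layout numbers reported earlier in this startup are internally consistent and can be sanity-checked by hand; a quick check using only values copied from the log (plain shell arithmetic, nothing SPDK-specific):

  # 'L2P entries: 20971520' x 'L2P address size: 4' B = the 80.00 MiB
  # reported for 'Region l2p' in the NV cache layout dump:
  echo $(( 20971520 * 4 / 1048576 ))      # -> 80 (MiB)
  # At a 4 KiB block size the same entry count maps 80 GiB of user space,
  # against the 102400.00 MiB 'data_btm' region on the base device, so
  # roughly 20% of the data region is left as spare capacity:
  echo $(( 20971520 * 4096 / 1048576 ))   # -> 81920 (MiB, i.e. 80 GiB)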
00:28:40.009 [2024-12-07 04:13:22.605037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.009 [2024-12-07 04:13:22.605104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:40.009 [2024-12-07 04:13:22.605120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.971 ms 00:28:40.009 [2024-12-07 04:13:22.605147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.009 [2024-12-07 04:13:22.615473] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:40.009 [2024-12-07 04:13:22.618482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.009 [2024-12-07 04:13:22.618511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:40.009 [2024-12-07 04:13:22.618524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.276 ms 00:28:40.009 [2024-12-07 04:13:22.618554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.009 [2024-12-07 04:13:22.618648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.009 [2024-12-07 04:13:22.618662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:40.009 [2024-12-07 04:13:22.618673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:40.009 [2024-12-07 04:13:22.618683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.009 [2024-12-07 04:13:22.618758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.009 [2024-12-07 04:13:22.618770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:40.009 [2024-12-07 04:13:22.618781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:28:40.009 [2024-12-07 04:13:22.618790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.009 [2024-12-07 04:13:22.618815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.009 [2024-12-07 04:13:22.618826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:40.009 [2024-12-07 04:13:22.618836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:40.009 [2024-12-07 04:13:22.618846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.009 [2024-12-07 04:13:22.618880] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:40.009 [2024-12-07 04:13:22.618893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.009 [2024-12-07 04:13:22.618903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:40.009 [2024-12-07 04:13:22.618913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:28:40.009 [2024-12-07 04:13:22.618926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.009 [2024-12-07 04:13:22.655391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.009 [2024-12-07 04:13:22.655436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:40.009 [2024-12-07 04:13:22.655451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.488 ms 00:28:40.009 [2024-12-07 04:13:22.655461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.009 [2024-12-07 04:13:22.655534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:40.009 [2024-12-07 
04:13:22.655546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:40.009 [2024-12-07 04:13:22.655557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:28:40.009 [2024-12-07 04:13:22.655567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:40.009 [2024-12-07 04:13:22.656669] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 377.761 ms, result 0 00:28:40.947  [2024-12-07T04:13:25.059Z] Copying: 25/1024 [MB] (25 MBps) [2024-12-07T04:13:25.991Z] Copying: 50/1024 [MB] (25 MBps) [2024-12-07T04:13:26.926Z] Copying: 75/1024 [MB] (25 MBps) [2024-12-07T04:13:27.862Z] Copying: 100/1024 [MB] (24 MBps) [2024-12-07T04:13:28.797Z] Copying: 126/1024 [MB] (25 MBps) [2024-12-07T04:13:29.732Z] Copying: 151/1024 [MB] (24 MBps) [2024-12-07T04:13:30.668Z] Copying: 176/1024 [MB] (25 MBps) [2024-12-07T04:13:32.047Z] Copying: 202/1024 [MB] (25 MBps) [2024-12-07T04:13:32.986Z] Copying: 227/1024 [MB] (24 MBps) [2024-12-07T04:13:33.925Z] Copying: 252/1024 [MB] (24 MBps) [2024-12-07T04:13:34.863Z] Copying: 276/1024 [MB] (24 MBps) [2024-12-07T04:13:35.799Z] Copying: 300/1024 [MB] (24 MBps) [2024-12-07T04:13:36.733Z] Copying: 324/1024 [MB] (24 MBps) [2024-12-07T04:13:37.669Z] Copying: 346/1024 [MB] (21 MBps) [2024-12-07T04:13:38.690Z] Copying: 371/1024 [MB] (24 MBps) [2024-12-07T04:13:40.069Z] Copying: 396/1024 [MB] (25 MBps) [2024-12-07T04:13:41.005Z] Copying: 419/1024 [MB] (23 MBps) [2024-12-07T04:13:41.941Z] Copying: 444/1024 [MB] (24 MBps) [2024-12-07T04:13:42.995Z] Copying: 468/1024 [MB] (24 MBps) [2024-12-07T04:13:43.931Z] Copying: 493/1024 [MB] (24 MBps) [2024-12-07T04:13:44.868Z] Copying: 517/1024 [MB] (24 MBps) [2024-12-07T04:13:45.803Z] Copying: 541/1024 [MB] (24 MBps) [2024-12-07T04:13:46.738Z] Copying: 566/1024 [MB] (24 MBps) [2024-12-07T04:13:47.672Z] Copying: 590/1024 [MB] (24 MBps) [2024-12-07T04:13:49.046Z] Copying: 614/1024 [MB] (23 MBps) [2024-12-07T04:13:49.979Z] Copying: 637/1024 [MB] (23 MBps) [2024-12-07T04:13:50.916Z] Copying: 661/1024 [MB] (23 MBps) [2024-12-07T04:13:51.851Z] Copying: 684/1024 [MB] (22 MBps) [2024-12-07T04:13:52.784Z] Copying: 706/1024 [MB] (22 MBps) [2024-12-07T04:13:53.721Z] Copying: 731/1024 [MB] (24 MBps) [2024-12-07T04:13:54.658Z] Copying: 753/1024 [MB] (22 MBps) [2024-12-07T04:13:56.033Z] Copying: 777/1024 [MB] (23 MBps) [2024-12-07T04:13:56.969Z] Copying: 801/1024 [MB] (24 MBps) [2024-12-07T04:13:57.905Z] Copying: 826/1024 [MB] (24 MBps) [2024-12-07T04:13:58.841Z] Copying: 848/1024 [MB] (22 MBps) [2024-12-07T04:13:59.776Z] Copying: 870/1024 [MB] (22 MBps) [2024-12-07T04:14:00.711Z] Copying: 893/1024 [MB] (22 MBps) [2024-12-07T04:14:01.647Z] Copying: 915/1024 [MB] (22 MBps) [2024-12-07T04:14:03.023Z] Copying: 938/1024 [MB] (22 MBps) [2024-12-07T04:14:03.959Z] Copying: 962/1024 [MB] (24 MBps) [2024-12-07T04:14:04.896Z] Copying: 984/1024 [MB] (21 MBps) [2024-12-07T04:14:05.832Z] Copying: 1008/1024 [MB] (23 MBps) [2024-12-07T04:14:06.091Z] Copying: 1023/1024 [MB] (15 MBps) [2024-12-07T04:14:06.091Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-12-07 04:14:05.974364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.355 [2024-12-07 04:14:05.974430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:23.355 [2024-12-07 04:14:05.974447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:23.355 [2024-12-07 04:14:05.974458] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.355 [2024-12-07 04:14:05.975289] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:23.355 [2024-12-07 04:14:05.981072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.355 [2024-12-07 04:14:05.981241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:23.355 [2024-12-07 04:14:05.981264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.757 ms 00:29:23.355 [2024-12-07 04:14:05.981282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.355 [2024-12-07 04:14:05.991657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.355 [2024-12-07 04:14:05.991710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:23.355 [2024-12-07 04:14:05.991724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.772 ms 00:29:23.355 [2024-12-07 04:14:05.991734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.355 [2024-12-07 04:14:06.015714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.355 [2024-12-07 04:14:06.015898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:23.355 [2024-12-07 04:14:06.015920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.998 ms 00:29:23.355 [2024-12-07 04:14:06.015945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.355 [2024-12-07 04:14:06.020855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.355 [2024-12-07 04:14:06.020888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:23.355 [2024-12-07 04:14:06.020900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.873 ms 00:29:23.355 [2024-12-07 04:14:06.020909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.355 [2024-12-07 04:14:06.055368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.355 [2024-12-07 04:14:06.055403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:23.355 [2024-12-07 04:14:06.055416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.446 ms 00:29:23.355 [2024-12-07 04:14:06.055425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.355 [2024-12-07 04:14:06.075084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.355 [2024-12-07 04:14:06.075256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:23.355 [2024-12-07 04:14:06.075293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.655 ms 00:29:23.355 [2024-12-07 04:14:06.075306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.614 [2024-12-07 04:14:06.192381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.614 [2024-12-07 04:14:06.192436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:23.614 [2024-12-07 04:14:06.192458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 117.225 ms 00:29:23.614 [2024-12-07 04:14:06.192469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.614 [2024-12-07 04:14:06.227611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.614 [2024-12-07 04:14:06.227646] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:23.614 [2024-12-07 04:14:06.227658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.182 ms 00:29:23.614 [2024-12-07 04:14:06.227680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.614 [2024-12-07 04:14:06.262149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.614 [2024-12-07 04:14:06.262324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:23.614 [2024-12-07 04:14:06.262361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.488 ms 00:29:23.614 [2024-12-07 04:14:06.262371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.614 [2024-12-07 04:14:06.296713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.614 [2024-12-07 04:14:06.296747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:23.614 [2024-12-07 04:14:06.296760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.361 ms 00:29:23.614 [2024-12-07 04:14:06.296785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.614 [2024-12-07 04:14:06.330878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.614 [2024-12-07 04:14:06.330914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:23.614 [2024-12-07 04:14:06.330952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.077 ms 00:29:23.614 [2024-12-07 04:14:06.330963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.614 [2024-12-07 04:14:06.330999] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:23.614 [2024-12-07 04:14:06.331014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 107520 / 261120 wr_cnt: 1 state: open 00:29:23.614 [2024-12-07 04:14:06.331027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331419] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:23.614 [2024-12-07 04:14:06.331440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 
04:14:06.331677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:29:23.615 [2024-12-07 04:14:06.331966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.331997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.332007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.332017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.332028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.332038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.332049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.332060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.332070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.332081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.332092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:23.615 [2024-12-07 04:14:06.332109] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:23.615 [2024-12-07 04:14:06.332118] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 90ccfb34-21a9-40df-a0bb-b5a05cb6ac2e 00:29:23.615 [2024-12-07 04:14:06.332145] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 107520 00:29:23.615 [2024-12-07 04:14:06.332154] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 108480 00:29:23.615 [2024-12-07 04:14:06.332164] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 107520 00:29:23.615 [2024-12-07 04:14:06.332175] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:29:23.615 [2024-12-07 04:14:06.332185] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:23.615 [2024-12-07 04:14:06.332194] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:23.615 [2024-12-07 04:14:06.332204] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:23.615 [2024-12-07 04:14:06.332213] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:23.615 [2024-12-07 04:14:06.332222] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:23.615 [2024-12-07 04:14:06.332231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.615 [2024-12-07 04:14:06.332241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:23.615 [2024-12-07 04:14:06.332252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.235 ms 00:29:23.615 [2024-12-07 04:14:06.332262] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:29:23.874 [2024-12-07 04:14:06.352069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.874 [2024-12-07 04:14:06.352101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:23.874 [2024-12-07 04:14:06.352114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.806 ms 00:29:23.874 [2024-12-07 04:14:06.352124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.874 [2024-12-07 04:14:06.352737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:23.874 [2024-12-07 04:14:06.352752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:23.874 [2024-12-07 04:14:06.352767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:29:23.874 [2024-12-07 04:14:06.352778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.874 [2024-12-07 04:14:06.402396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:23.874 [2024-12-07 04:14:06.402430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:23.874 [2024-12-07 04:14:06.402442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:23.874 [2024-12-07 04:14:06.402453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.874 [2024-12-07 04:14:06.402503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:23.874 [2024-12-07 04:14:06.402514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:23.874 [2024-12-07 04:14:06.402529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:23.874 [2024-12-07 04:14:06.402539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.874 [2024-12-07 04:14:06.402621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:23.874 [2024-12-07 04:14:06.402634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:23.874 [2024-12-07 04:14:06.402645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:23.874 [2024-12-07 04:14:06.402654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.874 [2024-12-07 04:14:06.402670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:23.874 [2024-12-07 04:14:06.402680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:23.874 [2024-12-07 04:14:06.402690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:23.874 [2024-12-07 04:14:06.402700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:23.874 [2024-12-07 04:14:06.516246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:23.874 [2024-12-07 04:14:06.516297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:23.875 [2024-12-07 04:14:06.516311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:23.875 [2024-12-07 04:14:06.516321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.134 [2024-12-07 04:14:06.613918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.134 [2024-12-07 04:14:06.613995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:24.134 [2024-12-07 04:14:06.614010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:29:24.134 [2024-12-07 04:14:06.614026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.134 [2024-12-07 04:14:06.614114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.134 [2024-12-07 04:14:06.614126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:24.134 [2024-12-07 04:14:06.614138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.134 [2024-12-07 04:14:06.614148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.134 [2024-12-07 04:14:06.614186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.134 [2024-12-07 04:14:06.614197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:24.134 [2024-12-07 04:14:06.614208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.134 [2024-12-07 04:14:06.614219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.134 [2024-12-07 04:14:06.614507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.134 [2024-12-07 04:14:06.614522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:24.134 [2024-12-07 04:14:06.614532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.134 [2024-12-07 04:14:06.614543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.134 [2024-12-07 04:14:06.614583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.134 [2024-12-07 04:14:06.614595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:24.134 [2024-12-07 04:14:06.614606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.134 [2024-12-07 04:14:06.614615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.134 [2024-12-07 04:14:06.614658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.134 [2024-12-07 04:14:06.614670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:24.134 [2024-12-07 04:14:06.614680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.134 [2024-12-07 04:14:06.614690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.135 [2024-12-07 04:14:06.614730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:24.135 [2024-12-07 04:14:06.614742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:24.135 [2024-12-07 04:14:06.614752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:24.135 [2024-12-07 04:14:06.614762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:24.135 [2024-12-07 04:14:06.614894] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 642.465 ms, result 0 00:29:25.513 00:29:25.513 00:29:25.513 04:14:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:26.889 04:14:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:27.146 [2024-12-07 04:14:09.697987] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
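Unlike the kill earlier, this shutdown runs to completion: the 'Persist ...' steps write the L2P, NV cache, P2L, band, and trim metadata back to the device and 'Set FTL clean state' marks the superblock clean before the rollback teardown of runtime resources. The band dump shows a single open band with 107520 valid LBAs, and the reported write amplification matches the counters exactly: 108480 total writes / 107520 user writes = 1.0089. The test then moves to verification: an md5sum of the source file at dirty_shutdown.sh@90, followed at @93 by a read-back of 262144 blocks from ftl0 into testfile, whose digest the script presumably compares against the one recorded earlier (the comparison itself falls outside this excerpt). A sketch of the read-back leg, flags as in the log, paths abbreviated:

  # Read 262144 blocks back out of the FTL bdev and checksum the result;
  # a matching digest means the data survived the kill/recovery cycle.
  "$SPDK_BIN_DIR/spdk_dd" --ib=ftl0 --of=testfile --count=262144 --json=config/ftl.json
  md5sum testfile   # compared against the digest recorded before the kill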
00:29:27.146 [2024-12-07 04:14:09.698101] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82558 ] 00:29:27.146 [2024-12-07 04:14:09.879207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.404 [2024-12-07 04:14:09.987039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.663 [2024-12-07 04:14:10.343330] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:27.663 [2024-12-07 04:14:10.343418] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:27.923 [2024-12-07 04:14:10.504644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.923 [2024-12-07 04:14:10.504883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:27.923 [2024-12-07 04:14:10.504925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:27.923 [2024-12-07 04:14:10.504938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.923 [2024-12-07 04:14:10.505018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.923 [2024-12-07 04:14:10.505035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:27.923 [2024-12-07 04:14:10.505046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:29:27.923 [2024-12-07 04:14:10.505057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.923 [2024-12-07 04:14:10.505079] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:27.923 [2024-12-07 04:14:10.506139] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:27.923 [2024-12-07 04:14:10.506169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.923 [2024-12-07 04:14:10.506180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:27.923 [2024-12-07 04:14:10.506201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.096 ms 00:29:27.923 [2024-12-07 04:14:10.506211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.923 [2024-12-07 04:14:10.507768] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:27.923 [2024-12-07 04:14:10.526400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.923 [2024-12-07 04:14:10.526568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:27.923 [2024-12-07 04:14:10.526590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.662 ms 00:29:27.923 [2024-12-07 04:14:10.526602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.923 [2024-12-07 04:14:10.526707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.923 [2024-12-07 04:14:10.526720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:27.923 [2024-12-07 04:14:10.526732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:29:27.923 [2024-12-07 04:14:10.526742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.923 [2024-12-07 04:14:10.533696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:27.923 [2024-12-07 04:14:10.533725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:27.923 [2024-12-07 04:14:10.533736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.892 ms 00:29:27.923 [2024-12-07 04:14:10.533749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.923 [2024-12-07 04:14:10.533823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.923 [2024-12-07 04:14:10.533834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:27.923 [2024-12-07 04:14:10.533844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:29:27.923 [2024-12-07 04:14:10.533853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.923 [2024-12-07 04:14:10.533891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.923 [2024-12-07 04:14:10.533902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:27.923 [2024-12-07 04:14:10.533912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:27.923 [2024-12-07 04:14:10.533921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.923 [2024-12-07 04:14:10.533982] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:27.923 [2024-12-07 04:14:10.538703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.923 [2024-12-07 04:14:10.538734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:27.923 [2024-12-07 04:14:10.538750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.750 ms 00:29:27.923 [2024-12-07 04:14:10.538760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.923 [2024-12-07 04:14:10.538793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.923 [2024-12-07 04:14:10.538804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:27.923 [2024-12-07 04:14:10.538814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:27.923 [2024-12-07 04:14:10.538824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.923 [2024-12-07 04:14:10.538876] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:27.923 [2024-12-07 04:14:10.538900] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:27.923 [2024-12-07 04:14:10.538948] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:27.923 [2024-12-07 04:14:10.538986] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:27.923 [2024-12-07 04:14:10.539089] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:27.923 [2024-12-07 04:14:10.539102] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:27.923 [2024-12-07 04:14:10.539116] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:27.923 [2024-12-07 04:14:10.539130] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:27.923 [2024-12-07 04:14:10.539142] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:27.923 [2024-12-07 04:14:10.539154] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:27.923 [2024-12-07 04:14:10.539164] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:27.923 [2024-12-07 04:14:10.539176] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:27.923 [2024-12-07 04:14:10.539186] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:27.923 [2024-12-07 04:14:10.539196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.923 [2024-12-07 04:14:10.539206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:27.923 [2024-12-07 04:14:10.539216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms 00:29:27.923 [2024-12-07 04:14:10.539226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.923 [2024-12-07 04:14:10.539297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.923 [2024-12-07 04:14:10.539308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:27.923 [2024-12-07 04:14:10.539318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:29:27.923 [2024-12-07 04:14:10.539328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.923 [2024-12-07 04:14:10.539425] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:27.923 [2024-12-07 04:14:10.539440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:27.923 [2024-12-07 04:14:10.539450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:27.923 [2024-12-07 04:14:10.539461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:27.923 [2024-12-07 04:14:10.539471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:27.923 [2024-12-07 04:14:10.539481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:27.923 [2024-12-07 04:14:10.539490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:27.923 [2024-12-07 04:14:10.539499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:27.923 [2024-12-07 04:14:10.539509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:27.923 [2024-12-07 04:14:10.539518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:27.923 [2024-12-07 04:14:10.539527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:27.923 [2024-12-07 04:14:10.539537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:27.923 [2024-12-07 04:14:10.539546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:27.923 [2024-12-07 04:14:10.539564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:27.923 [2024-12-07 04:14:10.539574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:27.923 [2024-12-07 04:14:10.539583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:27.923 [2024-12-07 04:14:10.539592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:27.923 [2024-12-07 04:14:10.539601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:27.923 [2024-12-07 04:14:10.539610] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:27.923 [2024-12-07 04:14:10.539619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:27.923 [2024-12-07 04:14:10.539628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:27.923 [2024-12-07 04:14:10.539637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:27.923 [2024-12-07 04:14:10.539646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:27.923 [2024-12-07 04:14:10.539656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:27.923 [2024-12-07 04:14:10.539665] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:27.923 [2024-12-07 04:14:10.539674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:27.923 [2024-12-07 04:14:10.539683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:27.923 [2024-12-07 04:14:10.539692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:27.924 [2024-12-07 04:14:10.539701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:27.924 [2024-12-07 04:14:10.539710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:27.924 [2024-12-07 04:14:10.539719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:27.924 [2024-12-07 04:14:10.539728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:27.924 [2024-12-07 04:14:10.539737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:27.924 [2024-12-07 04:14:10.539746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:27.924 [2024-12-07 04:14:10.539755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:27.924 [2024-12-07 04:14:10.539764] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:27.924 [2024-12-07 04:14:10.539772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:27.924 [2024-12-07 04:14:10.539781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:27.924 [2024-12-07 04:14:10.539790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:27.924 [2024-12-07 04:14:10.539799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:27.924 [2024-12-07 04:14:10.539808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:27.924 [2024-12-07 04:14:10.539817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:27.924 [2024-12-07 04:14:10.539826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:27.924 [2024-12-07 04:14:10.539836] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:27.924 [2024-12-07 04:14:10.539846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:27.924 [2024-12-07 04:14:10.539856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:27.924 [2024-12-07 04:14:10.539865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:27.924 [2024-12-07 04:14:10.539876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:27.924 [2024-12-07 04:14:10.539885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:27.924 [2024-12-07 04:14:10.539895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:27.924 
[2024-12-07 04:14:10.539904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:27.924 [2024-12-07 04:14:10.539913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:27.924 [2024-12-07 04:14:10.539923] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:27.924 [2024-12-07 04:14:10.539945] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:27.924 [2024-12-07 04:14:10.539958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:27.924 [2024-12-07 04:14:10.539974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:27.924 [2024-12-07 04:14:10.539985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:27.924 [2024-12-07 04:14:10.539996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:27.924 [2024-12-07 04:14:10.540006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:27.924 [2024-12-07 04:14:10.540016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:27.924 [2024-12-07 04:14:10.540027] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:27.924 [2024-12-07 04:14:10.540037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:27.924 [2024-12-07 04:14:10.540047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:27.924 [2024-12-07 04:14:10.540057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:27.924 [2024-12-07 04:14:10.540067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:27.924 [2024-12-07 04:14:10.540077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:27.924 [2024-12-07 04:14:10.540087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:27.924 [2024-12-07 04:14:10.540097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:27.924 [2024-12-07 04:14:10.540107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:27.924 [2024-12-07 04:14:10.540118] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:27.924 [2024-12-07 04:14:10.540130] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:27.924 [2024-12-07 04:14:10.540140] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:27.924 [2024-12-07 04:14:10.540150] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:27.924 [2024-12-07 04:14:10.540161] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:27.924 [2024-12-07 04:14:10.540171] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:27.924 [2024-12-07 04:14:10.540183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.924 [2024-12-07 04:14:10.540193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:27.924 [2024-12-07 04:14:10.540203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.813 ms 00:29:27.924 [2024-12-07 04:14:10.540213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.924 [2024-12-07 04:14:10.579104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.924 [2024-12-07 04:14:10.579137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:27.924 [2024-12-07 04:14:10.579151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.906 ms 00:29:27.924 [2024-12-07 04:14:10.579181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.924 [2024-12-07 04:14:10.579256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.924 [2024-12-07 04:14:10.579267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:27.924 [2024-12-07 04:14:10.579278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:29:27.924 [2024-12-07 04:14:10.579288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.924 [2024-12-07 04:14:10.634855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.924 [2024-12-07 04:14:10.635023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:27.924 [2024-12-07 04:14:10.635046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.600 ms 00:29:27.924 [2024-12-07 04:14:10.635057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.924 [2024-12-07 04:14:10.635094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.924 [2024-12-07 04:14:10.635105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:27.924 [2024-12-07 04:14:10.635120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:29:27.924 [2024-12-07 04:14:10.635131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.924 [2024-12-07 04:14:10.635623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.924 [2024-12-07 04:14:10.635638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:27.924 [2024-12-07 04:14:10.635649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:29:27.924 [2024-12-07 04:14:10.635659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.924 [2024-12-07 04:14:10.635773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.924 [2024-12-07 04:14:10.635787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:27.924 [2024-12-07 04:14:10.635803] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:29:27.924 [2024-12-07 04:14:10.635814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:27.924 [2024-12-07 04:14:10.655268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:27.924 [2024-12-07 04:14:10.655303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:27.924 [2024-12-07 04:14:10.655315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.465 ms 00:29:27.924 [2024-12-07 04:14:10.655326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.184 [2024-12-07 04:14:10.674217] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:29:28.184 [2024-12-07 04:14:10.674364] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:28.184 [2024-12-07 04:14:10.674400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.184 [2024-12-07 04:14:10.674411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:28.184 [2024-12-07 04:14:10.674423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.002 ms 00:29:28.184 [2024-12-07 04:14:10.674433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.184 [2024-12-07 04:14:10.705605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.184 [2024-12-07 04:14:10.705752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:28.184 [2024-12-07 04:14:10.705774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.182 ms 00:29:28.184 [2024-12-07 04:14:10.705785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.184 [2024-12-07 04:14:10.724322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.184 [2024-12-07 04:14:10.724482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:28.184 [2024-12-07 04:14:10.724504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.501 ms 00:29:28.184 [2024-12-07 04:14:10.724514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.184 [2024-12-07 04:14:10.742123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.184 [2024-12-07 04:14:10.742260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:28.184 [2024-12-07 04:14:10.742279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.601 ms 00:29:28.184 [2024-12-07 04:14:10.742289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.184 [2024-12-07 04:14:10.743152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.184 [2024-12-07 04:14:10.743184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:28.184 [2024-12-07 04:14:10.743200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.733 ms 00:29:28.184 [2024-12-07 04:14:10.743210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.184 [2024-12-07 04:14:10.825702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.184 [2024-12-07 04:14:10.825756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:28.184 [2024-12-07 04:14:10.825778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 82.603 ms 00:29:28.184 [2024-12-07 04:14:10.825788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.184 [2024-12-07 04:14:10.835966] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:28.184 [2024-12-07 04:14:10.838530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.184 [2024-12-07 04:14:10.838558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:28.184 [2024-12-07 04:14:10.838571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.717 ms 00:29:28.184 [2024-12-07 04:14:10.838597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.184 [2024-12-07 04:14:10.838675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.184 [2024-12-07 04:14:10.838688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:28.184 [2024-12-07 04:14:10.838703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:28.184 [2024-12-07 04:14:10.838714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.184 [2024-12-07 04:14:10.840231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.184 [2024-12-07 04:14:10.840269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:28.184 [2024-12-07 04:14:10.840282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.477 ms 00:29:28.184 [2024-12-07 04:14:10.840293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.184 [2024-12-07 04:14:10.840321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.184 [2024-12-07 04:14:10.840332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:28.184 [2024-12-07 04:14:10.840343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:28.184 [2024-12-07 04:14:10.840363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.184 [2024-12-07 04:14:10.840404] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:28.184 [2024-12-07 04:14:10.840417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.184 [2024-12-07 04:14:10.840429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:28.184 [2024-12-07 04:14:10.840439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:29:28.184 [2024-12-07 04:14:10.840448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.184 [2024-12-07 04:14:10.875052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.184 [2024-12-07 04:14:10.875089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:28.184 [2024-12-07 04:14:10.875108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.640 ms 00:29:28.184 [2024-12-07 04:14:10.875134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:28.184 [2024-12-07 04:14:10.875201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:28.184 [2024-12-07 04:14:10.875213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:28.184 [2024-12-07 04:14:10.875223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:29:28.184 [2024-12-07 04:14:10.875233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
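This bring-up is a reload of existing state: the superblock reports "SHM: clean 0", so the device restores its metadata from persisted structures (NV cache chunks, valid map, band info, trim metadata, P2L checkpoints, then L2P) instead of initializing from scratch, skips the self test, and re-marks itself dirty ("Set FTL dirty state") so that an unclean stop before the next proper shutdown stays detectable; the matching "Set FTL clean state" appears in the shutdown sequence further down. The layout figures printed during this startup are self-consistent; for instance, the L2P region size follows from the entry count and address size (numbers copied from the dump above):

  awk 'BEGIN { printf "%.2f MiB\n", 20971520 * 4 / 1048576 }'   # 20971520 entries x 4 B = 80.00 MiB, matching "Region l2p ... blocks: 80.00 MiB"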
00:29:28.184 [2024-12-07 04:14:10.876314] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 371.829 ms, result 0 00:29:29.564  [2024-12-07T04:14:13.236Z] Copying: 1312/1048576 [kB] (1312 kBps) [2024-12-07T04:14:14.173Z] Copying: 11116/1048576 [kB] (9804 kBps) [2024-12-07T04:14:15.110Z] Copying: 41/1024 [MB] (30 MBps) [2024-12-07T04:14:16.496Z] Copying: 71/1024 [MB] (30 MBps) [2024-12-07T04:14:17.435Z] Copying: 102/1024 [MB] (31 MBps) [2024-12-07T04:14:18.373Z] Copying: 135/1024 [MB] (32 MBps) [2024-12-07T04:14:19.310Z] Copying: 167/1024 [MB] (31 MBps) [2024-12-07T04:14:20.248Z] Copying: 198/1024 [MB] (30 MBps) [2024-12-07T04:14:21.186Z] Copying: 230/1024 [MB] (32 MBps) [2024-12-07T04:14:22.126Z] Copying: 261/1024 [MB] (31 MBps) [2024-12-07T04:14:23.509Z] Copying: 292/1024 [MB] (31 MBps) [2024-12-07T04:14:24.079Z] Copying: 323/1024 [MB] (31 MBps) [2024-12-07T04:14:25.463Z] Copying: 354/1024 [MB] (30 MBps) [2024-12-07T04:14:26.398Z] Copying: 385/1024 [MB] (30 MBps) [2024-12-07T04:14:27.332Z] Copying: 416/1024 [MB] (31 MBps) [2024-12-07T04:14:28.269Z] Copying: 447/1024 [MB] (30 MBps) [2024-12-07T04:14:29.207Z] Copying: 478/1024 [MB] (31 MBps) [2024-12-07T04:14:30.243Z] Copying: 510/1024 [MB] (31 MBps) [2024-12-07T04:14:31.198Z] Copying: 541/1024 [MB] (31 MBps) [2024-12-07T04:14:32.133Z] Copying: 572/1024 [MB] (31 MBps) [2024-12-07T04:14:33.065Z] Copying: 602/1024 [MB] (30 MBps) [2024-12-07T04:14:34.459Z] Copying: 634/1024 [MB] (31 MBps) [2024-12-07T04:14:35.394Z] Copying: 667/1024 [MB] (33 MBps) [2024-12-07T04:14:36.327Z] Copying: 700/1024 [MB] (33 MBps) [2024-12-07T04:14:37.260Z] Copying: 731/1024 [MB] (31 MBps) [2024-12-07T04:14:38.196Z] Copying: 762/1024 [MB] (30 MBps) [2024-12-07T04:14:39.134Z] Copying: 794/1024 [MB] (31 MBps) [2024-12-07T04:14:40.072Z] Copying: 826/1024 [MB] (32 MBps) [2024-12-07T04:14:41.452Z] Copying: 857/1024 [MB] (31 MBps) [2024-12-07T04:14:42.391Z] Copying: 890/1024 [MB] (32 MBps) [2024-12-07T04:14:43.331Z] Copying: 922/1024 [MB] (31 MBps) [2024-12-07T04:14:44.270Z] Copying: 953/1024 [MB] (31 MBps) [2024-12-07T04:14:45.214Z] Copying: 985/1024 [MB] (31 MBps) [2024-12-07T04:14:45.480Z] Copying: 1017/1024 [MB] (32 MBps) [2024-12-07T04:14:46.862Z] Copying: 1024/1024 [MB] (average 29 MBps)[2024-12-07 04:14:46.472682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.126 [2024-12-07 04:14:46.472824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:04.126 [2024-12-07 04:14:46.472873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:30:04.126 [2024-12-07 04:14:46.472909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.126 [2024-12-07 04:14:46.473105] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:04.126 [2024-12-07 04:14:46.486458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.126 [2024-12-07 04:14:46.486757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:04.126 [2024-12-07 04:14:46.486802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.319 ms 00:30:04.126 [2024-12-07 04:14:46.486823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.126 [2024-12-07 04:14:46.487319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.126 [2024-12-07 04:14:46.487385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Stop core poller 00:30:04.126 [2024-12-07 04:14:46.487410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:30:04.126 [2024-12-07 04:14:46.487429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.126 [2024-12-07 04:14:46.503742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.126 [2024-12-07 04:14:46.503809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:04.126 [2024-12-07 04:14:46.503830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.305 ms 00:30:04.126 [2024-12-07 04:14:46.503845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.126 [2024-12-07 04:14:46.509395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.126 [2024-12-07 04:14:46.509611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:04.126 [2024-12-07 04:14:46.509644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.509 ms 00:30:04.127 [2024-12-07 04:14:46.509655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.127 [2024-12-07 04:14:46.545480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.127 [2024-12-07 04:14:46.545522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:04.127 [2024-12-07 04:14:46.545536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.809 ms 00:30:04.127 [2024-12-07 04:14:46.545546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.127 [2024-12-07 04:14:46.565692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.127 [2024-12-07 04:14:46.565733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:04.127 [2024-12-07 04:14:46.565747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.133 ms 00:30:04.127 [2024-12-07 04:14:46.565758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.127 [2024-12-07 04:14:46.567996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.127 [2024-12-07 04:14:46.568037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:04.127 [2024-12-07 04:14:46.568050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.197 ms 00:30:04.127 [2024-12-07 04:14:46.568067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.127 [2024-12-07 04:14:46.603689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.127 [2024-12-07 04:14:46.603724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:04.127 [2024-12-07 04:14:46.603737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.662 ms 00:30:04.127 [2024-12-07 04:14:46.603746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.127 [2024-12-07 04:14:46.638114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.127 [2024-12-07 04:14:46.638252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:04.127 [2024-12-07 04:14:46.638271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.386 ms 00:30:04.127 [2024-12-07 04:14:46.638296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.127 [2024-12-07 04:14:46.672154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.127 [2024-12-07 
04:14:46.672296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:04.127 [2024-12-07 04:14:46.672315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.819 ms 00:30:04.127 [2024-12-07 04:14:46.672340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.127 [2024-12-07 04:14:46.706024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.127 [2024-12-07 04:14:46.706060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:04.127 [2024-12-07 04:14:46.706072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.632 ms 00:30:04.127 [2024-12-07 04:14:46.706081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.127 [2024-12-07 04:14:46.706116] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:04.127 [2024-12-07 04:14:46.706131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:04.127 [2024-12-07 04:14:46.706143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:30:04.127 [2024-12-07 04:14:46.706154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706599] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:04.127 [2024-12-07 04:14:46.706620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 
04:14:46.706859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.706994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.707004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.707014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.707024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.707035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.707063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.707073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.707084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.707094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.707121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.707131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.707142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.707152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 
00:30:04.128 [2024-12-07 04:14:46.707163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.707173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.707184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.707194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.707204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.707219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.707229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:04.128 [2024-12-07 04:14:46.707246] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:04.128 [2024-12-07 04:14:46.707255] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 90ccfb34-21a9-40df-a0bb-b5a05cb6ac2e 00:30:04.128 [2024-12-07 04:14:46.707266] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:30:04.128 [2024-12-07 04:14:46.707276] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 157120 00:30:04.128 [2024-12-07 04:14:46.707289] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 155136 00:30:04.128 [2024-12-07 04:14:46.707300] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0128 00:30:04.128 [2024-12-07 04:14:46.707309] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:04.128 [2024-12-07 04:14:46.707329] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:04.128 [2024-12-07 04:14:46.707339] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:04.128 [2024-12-07 04:14:46.707348] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:04.128 [2024-12-07 04:14:46.707357] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:04.128 [2024-12-07 04:14:46.707368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.128 [2024-12-07 04:14:46.707378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:04.128 [2024-12-07 04:14:46.707389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.255 ms 00:30:04.128 [2024-12-07 04:14:46.707398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.128 [2024-12-07 04:14:46.726626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.129 [2024-12-07 04:14:46.726761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:04.129 [2024-12-07 04:14:46.726795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.224 ms 00:30:04.129 [2024-12-07 04:14:46.726812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.129 [2024-12-07 04:14:46.727396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:04.129 [2024-12-07 04:14:46.727412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:04.129 [2024-12-07 04:14:46.727423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:30:04.129 [2024-12-07 
04:14:46.727433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.129 [2024-12-07 04:14:46.775671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.129 [2024-12-07 04:14:46.775704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:04.129 [2024-12-07 04:14:46.775717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.129 [2024-12-07 04:14:46.775743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.129 [2024-12-07 04:14:46.775792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.129 [2024-12-07 04:14:46.775802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:04.129 [2024-12-07 04:14:46.775812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.129 [2024-12-07 04:14:46.775821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.129 [2024-12-07 04:14:46.775896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.129 [2024-12-07 04:14:46.775909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:04.129 [2024-12-07 04:14:46.775920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.129 [2024-12-07 04:14:46.775929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.129 [2024-12-07 04:14:46.776089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.129 [2024-12-07 04:14:46.776144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:04.129 [2024-12-07 04:14:46.776174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.129 [2024-12-07 04:14:46.776202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.388 [2024-12-07 04:14:46.897101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.388 [2024-12-07 04:14:46.897270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:04.388 [2024-12-07 04:14:46.897308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.388 [2024-12-07 04:14:46.897319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.388 [2024-12-07 04:14:46.992061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.388 [2024-12-07 04:14:46.992108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:04.388 [2024-12-07 04:14:46.992122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.388 [2024-12-07 04:14:46.992132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.388 [2024-12-07 04:14:46.992221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.388 [2024-12-07 04:14:46.992236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:04.388 [2024-12-07 04:14:46.992246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.388 [2024-12-07 04:14:46.992256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.388 [2024-12-07 04:14:46.992291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.388 [2024-12-07 04:14:46.992302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:04.388 [2024-12-07 04:14:46.992311] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.388 [2024-12-07 04:14:46.992321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.388 [2024-12-07 04:14:46.992430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.388 [2024-12-07 04:14:46.992443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:04.388 [2024-12-07 04:14:46.992457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.388 [2024-12-07 04:14:46.992466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.388 [2024-12-07 04:14:46.992499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.388 [2024-12-07 04:14:46.992510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:04.388 [2024-12-07 04:14:46.992520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.388 [2024-12-07 04:14:46.992529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.388 [2024-12-07 04:14:46.992567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.388 [2024-12-07 04:14:46.992578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:04.388 [2024-12-07 04:14:46.992591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.388 [2024-12-07 04:14:46.992601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.388 [2024-12-07 04:14:46.992639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:04.388 [2024-12-07 04:14:46.992649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:04.388 [2024-12-07 04:14:46.992659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:04.388 [2024-12-07 04:14:46.992669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:04.388 [2024-12-07 04:14:46.992782] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 520.968 ms, result 0 00:30:05.325 00:30:05.325 00:30:05.325 04:14:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:07.228 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:30:07.228 04:14:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:07.228 [2024-12-07 04:14:49.731811] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
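The statistics dump in the shutdown above is easy to cross-check: the reported WAF is consistent with total writes divided by user writes (counters copied from the "Dump statistics" block):

  awk 'BEGIN { printf "WAF = %.4f\n", 157120 / 155136 }'   # prints WAF = 1.0128, as reported

With md5sum -c confirming testfile: OK, the data written before the dirty shutdown has read back intact; the second spdk_dd invocation (dirty_shutdown.sh@95) now repeats the read for testfile2, starting at --skip=262144.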
00:30:07.228 [2024-12-07 04:14:49.732121] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82956 ] 00:30:07.228 [2024-12-07 04:14:49.914165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.487 [2024-12-07 04:14:50.024988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.746 [2024-12-07 04:14:50.381086] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:07.746 [2024-12-07 04:14:50.381385] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:30:08.006 [2024-12-07 04:14:50.542769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.006 [2024-12-07 04:14:50.542978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:30:08.006 [2024-12-07 04:14:50.543019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:30:08.006 [2024-12-07 04:14:50.543030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.006 [2024-12-07 04:14:50.543090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.006 [2024-12-07 04:14:50.543105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:08.006 [2024-12-07 04:14:50.543117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:30:08.006 [2024-12-07 04:14:50.543127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.006 [2024-12-07 04:14:50.543149] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:30:08.006 [2024-12-07 04:14:50.544135] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:30:08.006 [2024-12-07 04:14:50.544157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.006 [2024-12-07 04:14:50.544168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:08.006 [2024-12-07 04:14:50.544179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.015 ms 00:30:08.006 [2024-12-07 04:14:50.544189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.006 [2024-12-07 04:14:50.545639] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:30:08.006 [2024-12-07 04:14:50.563904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.006 [2024-12-07 04:14:50.563950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:30:08.006 [2024-12-07 04:14:50.563964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.296 ms 00:30:08.006 [2024-12-07 04:14:50.563974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.006 [2024-12-07 04:14:50.564056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.006 [2024-12-07 04:14:50.564068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:30:08.006 [2024-12-07 04:14:50.564079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:30:08.006 [2024-12-07 04:14:50.564089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.006 [2024-12-07 04:14:50.571070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:08.006 [2024-12-07 04:14:50.571099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:08.006 [2024-12-07 04:14:50.571111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.905 ms 00:30:08.006 [2024-12-07 04:14:50.571125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.006 [2024-12-07 04:14:50.571200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.006 [2024-12-07 04:14:50.571213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:08.006 [2024-12-07 04:14:50.571223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:30:08.006 [2024-12-07 04:14:50.571233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.006 [2024-12-07 04:14:50.571269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.006 [2024-12-07 04:14:50.571280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:30:08.006 [2024-12-07 04:14:50.571291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:08.006 [2024-12-07 04:14:50.571300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.006 [2024-12-07 04:14:50.571327] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:30:08.006 [2024-12-07 04:14:50.576039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.006 [2024-12-07 04:14:50.576068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:08.006 [2024-12-07 04:14:50.576083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.724 ms 00:30:08.006 [2024-12-07 04:14:50.576093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.006 [2024-12-07 04:14:50.576123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.006 [2024-12-07 04:14:50.576133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:30:08.006 [2024-12-07 04:14:50.576144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:30:08.006 [2024-12-07 04:14:50.576153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.006 [2024-12-07 04:14:50.576199] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:30:08.006 [2024-12-07 04:14:50.576224] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:30:08.006 [2024-12-07 04:14:50.576263] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:30:08.006 [2024-12-07 04:14:50.576285] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:30:08.006 [2024-12-07 04:14:50.576366] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:30:08.006 [2024-12-07 04:14:50.576379] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:30:08.006 [2024-12-07 04:14:50.576391] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:30:08.006 [2024-12-07 04:14:50.576404] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:30:08.006 [2024-12-07 04:14:50.576415] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:30:08.006 [2024-12-07 04:14:50.576426] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:30:08.006 [2024-12-07 04:14:50.576436] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:30:08.006 [2024-12-07 04:14:50.576448] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:30:08.006 [2024-12-07 04:14:50.576457] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:30:08.006 [2024-12-07 04:14:50.576466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.006 [2024-12-07 04:14:50.576476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:30:08.006 [2024-12-07 04:14:50.576486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:30:08.006 [2024-12-07 04:14:50.576495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.006 [2024-12-07 04:14:50.576561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.006 [2024-12-07 04:14:50.576571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:30:08.006 [2024-12-07 04:14:50.576581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:30:08.006 [2024-12-07 04:14:50.576590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.006 [2024-12-07 04:14:50.576674] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:30:08.006 [2024-12-07 04:14:50.576688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:30:08.006 [2024-12-07 04:14:50.576697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:08.006 [2024-12-07 04:14:50.576707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.006 [2024-12-07 04:14:50.576717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:30:08.006 [2024-12-07 04:14:50.576726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:30:08.006 [2024-12-07 04:14:50.576735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:30:08.006 [2024-12-07 04:14:50.576744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:30:08.006 [2024-12-07 04:14:50.576753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:30:08.006 [2024-12-07 04:14:50.576762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:08.006 [2024-12-07 04:14:50.576772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:30:08.006 [2024-12-07 04:14:50.576781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:30:08.006 [2024-12-07 04:14:50.576789] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:30:08.006 [2024-12-07 04:14:50.576807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:30:08.006 [2024-12-07 04:14:50.576816] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:30:08.006 [2024-12-07 04:14:50.576825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.006 [2024-12-07 04:14:50.576834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:30:08.006 [2024-12-07 04:14:50.576842] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:30:08.006 [2024-12-07 04:14:50.576851] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.006 [2024-12-07 04:14:50.576860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:30:08.006 [2024-12-07 04:14:50.576868] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:30:08.006 [2024-12-07 04:14:50.576877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:08.006 [2024-12-07 04:14:50.576886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:30:08.007 [2024-12-07 04:14:50.576895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:30:08.007 [2024-12-07 04:14:50.576903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:08.007 [2024-12-07 04:14:50.576911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:30:08.007 [2024-12-07 04:14:50.576919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:30:08.007 [2024-12-07 04:14:50.576943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:08.007 [2024-12-07 04:14:50.576952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:30:08.007 [2024-12-07 04:14:50.576961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:30:08.007 [2024-12-07 04:14:50.576986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:30:08.007 [2024-12-07 04:14:50.576995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:30:08.007 [2024-12-07 04:14:50.577004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:30:08.007 [2024-12-07 04:14:50.577014] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:08.007 [2024-12-07 04:14:50.577023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:30:08.007 [2024-12-07 04:14:50.577032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:30:08.007 [2024-12-07 04:14:50.577059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:30:08.007 [2024-12-07 04:14:50.577068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:30:08.007 [2024-12-07 04:14:50.577077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:30:08.007 [2024-12-07 04:14:50.577086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.007 [2024-12-07 04:14:50.577095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:30:08.007 [2024-12-07 04:14:50.577105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:30:08.007 [2024-12-07 04:14:50.577115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.007 [2024-12-07 04:14:50.577125] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:30:08.007 [2024-12-07 04:14:50.577136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:30:08.007 [2024-12-07 04:14:50.577146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:30:08.007 [2024-12-07 04:14:50.577155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:30:08.007 [2024-12-07 04:14:50.577165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:30:08.007 [2024-12-07 04:14:50.577174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:30:08.007 [2024-12-07 04:14:50.577184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:30:08.007 
[2024-12-07 04:14:50.577194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:30:08.007 [2024-12-07 04:14:50.577212] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:30:08.007 [2024-12-07 04:14:50.577221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:30:08.007 [2024-12-07 04:14:50.577233] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:30:08.007 [2024-12-07 04:14:50.577245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:08.007 [2024-12-07 04:14:50.577261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:30:08.007 [2024-12-07 04:14:50.577271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:30:08.007 [2024-12-07 04:14:50.577282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:30:08.007 [2024-12-07 04:14:50.577292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:30:08.007 [2024-12-07 04:14:50.577302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:30:08.007 [2024-12-07 04:14:50.577312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:30:08.007 [2024-12-07 04:14:50.577323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:30:08.007 [2024-12-07 04:14:50.577333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:30:08.007 [2024-12-07 04:14:50.577344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:30:08.007 [2024-12-07 04:14:50.577353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:30:08.007 [2024-12-07 04:14:50.577364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:30:08.007 [2024-12-07 04:14:50.577374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:30:08.007 [2024-12-07 04:14:50.577384] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:30:08.007 [2024-12-07 04:14:50.577394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:30:08.007 [2024-12-07 04:14:50.577404] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:30:08.007 [2024-12-07 04:14:50.577416] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:08.007 [2024-12-07 04:14:50.577426] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:30:08.007 [2024-12-07 04:14:50.577437] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:30:08.007 [2024-12-07 04:14:50.577447] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:30:08.007 [2024-12-07 04:14:50.577457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:30:08.007 [2024-12-07 04:14:50.577483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.007 [2024-12-07 04:14:50.577493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:30:08.007 [2024-12-07 04:14:50.577503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.860 ms 00:30:08.007 [2024-12-07 04:14:50.577513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.007 [2024-12-07 04:14:50.615017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.007 [2024-12-07 04:14:50.615055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:08.007 [2024-12-07 04:14:50.615068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.515 ms 00:30:08.007 [2024-12-07 04:14:50.615098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.007 [2024-12-07 04:14:50.615173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.007 [2024-12-07 04:14:50.615183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:30:08.007 [2024-12-07 04:14:50.615194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:30:08.007 [2024-12-07 04:14:50.615203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.007 [2024-12-07 04:14:50.672518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.007 [2024-12-07 04:14:50.672686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:08.007 [2024-12-07 04:14:50.672820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.350 ms 00:30:08.007 [2024-12-07 04:14:50.672859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.007 [2024-12-07 04:14:50.672915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.007 [2024-12-07 04:14:50.672971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:08.007 [2024-12-07 04:14:50.673011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:08.007 [2024-12-07 04:14:50.673100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.007 [2024-12-07 04:14:50.673634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.007 [2024-12-07 04:14:50.673679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:08.007 [2024-12-07 04:14:50.673852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:30:08.007 [2024-12-07 04:14:50.673890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.007 [2024-12-07 04:14:50.674052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.007 [2024-12-07 04:14:50.674252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:08.007 [2024-12-07 04:14:50.674299] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:30:08.007 [2024-12-07 04:14:50.674311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.007 [2024-12-07 04:14:50.692608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.007 [2024-12-07 04:14:50.692760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:08.007 [2024-12-07 04:14:50.692796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.289 ms 00:30:08.007 [2024-12-07 04:14:50.692807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.007 [2024-12-07 04:14:50.711151] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:08.007 [2024-12-07 04:14:50.711280] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:30:08.007 [2024-12-07 04:14:50.711299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.007 [2024-12-07 04:14:50.711309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:30:08.007 [2024-12-07 04:14:50.711337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.418 ms 00:30:08.007 [2024-12-07 04:14:50.711347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.007 [2024-12-07 04:14:50.739486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.007 [2024-12-07 04:14:50.739525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:30:08.007 [2024-12-07 04:14:50.739540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.097 ms 00:30:08.007 [2024-12-07 04:14:50.739551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.267 [2024-12-07 04:14:50.757592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.267 [2024-12-07 04:14:50.757626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:30:08.267 [2024-12-07 04:14:50.757639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.014 ms 00:30:08.267 [2024-12-07 04:14:50.757648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.267 [2024-12-07 04:14:50.774718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.268 [2024-12-07 04:14:50.774751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:30:08.268 [2024-12-07 04:14:50.774763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.059 ms 00:30:08.268 [2024-12-07 04:14:50.774789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.268 [2024-12-07 04:14:50.775562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.268 [2024-12-07 04:14:50.775595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:30:08.268 [2024-12-07 04:14:50.775611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.664 ms 00:30:08.268 [2024-12-07 04:14:50.775621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.268 [2024-12-07 04:14:50.858251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.268 [2024-12-07 04:14:50.858315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:30:08.268 [2024-12-07 04:14:50.858344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 82.741 ms 00:30:08.268 [2024-12-07 04:14:50.858355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.268 [2024-12-07 04:14:50.868707] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:30:08.268 [2024-12-07 04:14:50.870986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.268 [2024-12-07 04:14:50.871017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:30:08.268 [2024-12-07 04:14:50.871030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.606 ms 00:30:08.268 [2024-12-07 04:14:50.871040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.268 [2024-12-07 04:14:50.871113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.268 [2024-12-07 04:14:50.871126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:30:08.268 [2024-12-07 04:14:50.871140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:30:08.268 [2024-12-07 04:14:50.871151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.268 [2024-12-07 04:14:50.872041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.268 [2024-12-07 04:14:50.872062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:30:08.268 [2024-12-07 04:14:50.872073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.850 ms 00:30:08.268 [2024-12-07 04:14:50.872083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.268 [2024-12-07 04:14:50.872104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.268 [2024-12-07 04:14:50.872115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:30:08.268 [2024-12-07 04:14:50.872125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:30:08.268 [2024-12-07 04:14:50.872135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.268 [2024-12-07 04:14:50.872172] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:30:08.268 [2024-12-07 04:14:50.872184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.268 [2024-12-07 04:14:50.872194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:30:08.268 [2024-12-07 04:14:50.872204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:30:08.268 [2024-12-07 04:14:50.872214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.268 [2024-12-07 04:14:50.907532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.268 [2024-12-07 04:14:50.907570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:30:08.268 [2024-12-07 04:14:50.907590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.355 ms 00:30:08.268 [2024-12-07 04:14:50.907601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:08.268 [2024-12-07 04:14:50.907671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:08.268 [2024-12-07 04:14:50.907682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:30:08.268 [2024-12-07 04:14:50.907693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:30:08.268 [2024-12-07 04:14:50.907703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:30:08.268 [2024-12-07 04:14:50.908843] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 366.225 ms, result 0 00:30:09.648  [2024-12-07T04:14:53.323Z] Copying: 24/1024 [MB] (24 MBps) [2024-12-07T04:14:54.262Z] Copying: 50/1024 [MB] (25 MBps) [2024-12-07T04:14:55.264Z] Copying: 76/1024 [MB] (26 MBps) [2024-12-07T04:14:56.202Z] Copying: 102/1024 [MB] (26 MBps) [2024-12-07T04:14:57.138Z] Copying: 129/1024 [MB] (26 MBps) [2024-12-07T04:14:58.516Z] Copying: 155/1024 [MB] (26 MBps) [2024-12-07T04:14:59.453Z] Copying: 181/1024 [MB] (25 MBps) [2024-12-07T04:15:00.390Z] Copying: 206/1024 [MB] (25 MBps) [2024-12-07T04:15:01.327Z] Copying: 232/1024 [MB] (25 MBps) [2024-12-07T04:15:02.265Z] Copying: 259/1024 [MB] (27 MBps) [2024-12-07T04:15:03.204Z] Copying: 286/1024 [MB] (26 MBps) [2024-12-07T04:15:04.136Z] Copying: 314/1024 [MB] (28 MBps) [2024-12-07T04:15:05.514Z] Copying: 340/1024 [MB] (26 MBps) [2024-12-07T04:15:06.448Z] Copying: 366/1024 [MB] (26 MBps) [2024-12-07T04:15:07.385Z] Copying: 392/1024 [MB] (25 MBps) [2024-12-07T04:15:08.319Z] Copying: 419/1024 [MB] (27 MBps) [2024-12-07T04:15:09.254Z] Copying: 446/1024 [MB] (27 MBps) [2024-12-07T04:15:10.188Z] Copying: 473/1024 [MB] (26 MBps) [2024-12-07T04:15:11.127Z] Copying: 499/1024 [MB] (26 MBps) [2024-12-07T04:15:12.509Z] Copying: 525/1024 [MB] (25 MBps) [2024-12-07T04:15:13.449Z] Copying: 551/1024 [MB] (25 MBps) [2024-12-07T04:15:14.391Z] Copying: 576/1024 [MB] (25 MBps) [2024-12-07T04:15:15.333Z] Copying: 602/1024 [MB] (25 MBps) [2024-12-07T04:15:16.270Z] Copying: 629/1024 [MB] (26 MBps) [2024-12-07T04:15:17.210Z] Copying: 655/1024 [MB] (26 MBps) [2024-12-07T04:15:18.151Z] Copying: 681/1024 [MB] (26 MBps) [2024-12-07T04:15:19.090Z] Copying: 708/1024 [MB] (26 MBps) [2024-12-07T04:15:20.471Z] Copying: 735/1024 [MB] (26 MBps) [2024-12-07T04:15:21.408Z] Copying: 762/1024 [MB] (26 MBps) [2024-12-07T04:15:22.352Z] Copying: 787/1024 [MB] (25 MBps) [2024-12-07T04:15:23.291Z] Copying: 813/1024 [MB] (25 MBps) [2024-12-07T04:15:24.229Z] Copying: 839/1024 [MB] (26 MBps) [2024-12-07T04:15:25.168Z] Copying: 865/1024 [MB] (25 MBps) [2024-12-07T04:15:26.109Z] Copying: 891/1024 [MB] (25 MBps) [2024-12-07T04:15:27.096Z] Copying: 917/1024 [MB] (25 MBps) [2024-12-07T04:15:28.471Z] Copying: 943/1024 [MB] (26 MBps) [2024-12-07T04:15:29.405Z] Copying: 970/1024 [MB] (26 MBps) [2024-12-07T04:15:30.343Z] Copying: 997/1024 [MB] (26 MBps) [2024-12-07T04:15:30.343Z] Copying: 1023/1024 [MB] (26 MBps) [2024-12-07T04:15:30.343Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-12-07 04:15:30.297476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.607 [2024-12-07 04:15:30.297908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:30:47.607 [2024-12-07 04:15:30.298059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:30:47.607 [2024-12-07 04:15:30.298105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.607 [2024-12-07 04:15:30.298255] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:30:47.607 [2024-12-07 04:15:30.303500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.607 [2024-12-07 04:15:30.303693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:30:47.607 [2024-12-07 04:15:30.303793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.177 ms 00:30:47.607 
[2024-12-07 04:15:30.303834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.607 [2024-12-07 04:15:30.304289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.607 [2024-12-07 04:15:30.304324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:30:47.607 [2024-12-07 04:15:30.304338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:30:47.607 [2024-12-07 04:15:30.304349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.607 [2024-12-07 04:15:30.307399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.607 [2024-12-07 04:15:30.307558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:30:47.607 [2024-12-07 04:15:30.307580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.038 ms 00:30:47.607 [2024-12-07 04:15:30.307600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.607 [2024-12-07 04:15:30.313689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.607 [2024-12-07 04:15:30.313734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:30:47.607 [2024-12-07 04:15:30.313747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.066 ms 00:30:47.607 [2024-12-07 04:15:30.313758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.868 [2024-12-07 04:15:30.351401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.868 [2024-12-07 04:15:30.351442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:30:47.868 [2024-12-07 04:15:30.351458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.619 ms 00:30:47.868 [2024-12-07 04:15:30.351468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.868 [2024-12-07 04:15:30.372037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.868 [2024-12-07 04:15:30.372076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:30:47.868 [2024-12-07 04:15:30.372089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.548 ms 00:30:47.868 [2024-12-07 04:15:30.372099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.868 [2024-12-07 04:15:30.374349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.868 [2024-12-07 04:15:30.374388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:30:47.868 [2024-12-07 04:15:30.374401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.203 ms 00:30:47.868 [2024-12-07 04:15:30.374412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.868 [2024-12-07 04:15:30.409979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.868 [2024-12-07 04:15:30.410012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:30:47.868 [2024-12-07 04:15:30.410024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.607 ms 00:30:47.869 [2024-12-07 04:15:30.410049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.869 [2024-12-07 04:15:30.444802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.869 [2024-12-07 04:15:30.444836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:30:47.869 [2024-12-07 04:15:30.444849] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.773 ms 00:30:47.869 [2024-12-07 04:15:30.444858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.869 [2024-12-07 04:15:30.480349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.869 [2024-12-07 04:15:30.480389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:30:47.869 [2024-12-07 04:15:30.480402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.514 ms 00:30:47.869 [2024-12-07 04:15:30.480427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.869 [2024-12-07 04:15:30.514753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.869 [2024-12-07 04:15:30.514790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:30:47.869 [2024-12-07 04:15:30.514802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.307 ms 00:30:47.869 [2024-12-07 04:15:30.514811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.869 [2024-12-07 04:15:30.514862] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:30:47.869 [2024-12-07 04:15:30.514886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:47.869 [2024-12-07 04:15:30.514902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:30:47.869 [2024-12-07 04:15:30.514913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.514923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.514949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.514960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.514971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.514982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.514992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515075] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515358] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:30:47.869 [2024-12-07 04:15:30.515537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 
04:15:30.515642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 
00:30:47.870 [2024-12-07 04:15:30.515902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.515994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:30:47.870 [2024-12-07 04:15:30.516011] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:30:47.870 [2024-12-07 04:15:30.516021] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 90ccfb34-21a9-40df-a0bb-b5a05cb6ac2e 00:30:47.870 [2024-12-07 04:15:30.516031] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:30:47.870 [2024-12-07 04:15:30.516041] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:30:47.870 [2024-12-07 04:15:30.516051] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:30:47.870 [2024-12-07 04:15:30.516062] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:30:47.870 [2024-12-07 04:15:30.516081] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:30:47.870 [2024-12-07 04:15:30.516092] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:30:47.870 [2024-12-07 04:15:30.516101] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:30:47.870 [2024-12-07 04:15:30.516110] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:30:47.870 [2024-12-07 04:15:30.516119] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:30:47.870 [2024-12-07 04:15:30.516129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.870 [2024-12-07 04:15:30.516139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:30:47.870 [2024-12-07 04:15:30.516150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.269 ms 00:30:47.870 [2024-12-07 04:15:30.516164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.870 [2024-12-07 04:15:30.535557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:30:47.870 [2024-12-07 04:15:30.535679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:30:47.870 [2024-12-07 04:15:30.535697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.369 ms 00:30:47.870 [2024-12-07 04:15:30.535723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.870 [2024-12-07 04:15:30.536258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:30:47.870 [2024-12-07 04:15:30.536283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:30:47.870 [2024-12-07 04:15:30.536294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:30:47.870 [2024-12-07 04:15:30.536304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.870 [2024-12-07 04:15:30.585423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.870 [2024-12-07 04:15:30.585455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:30:47.870 [2024-12-07 04:15:30.585467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.870 [2024-12-07 04:15:30.585493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.870 [2024-12-07 04:15:30.585544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.870 [2024-12-07 04:15:30.585560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:30:47.870 [2024-12-07 04:15:30.585571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.870 [2024-12-07 04:15:30.585580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.871 [2024-12-07 04:15:30.585641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.871 [2024-12-07 04:15:30.585655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:30:47.871 [2024-12-07 04:15:30.585665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.871 [2024-12-07 04:15:30.585675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:47.871 [2024-12-07 04:15:30.585691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:47.871 [2024-12-07 04:15:30.585701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:30:47.871 [2024-12-07 04:15:30.585716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:47.871 [2024-12-07 04:15:30.585725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.130 [2024-12-07 04:15:30.705572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:48.130 [2024-12-07 04:15:30.705789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:30:48.131 [2024-12-07 04:15:30.705987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:48.131 [2024-12-07 04:15:30.706027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.131 [2024-12-07 04:15:30.802542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:48.131 [2024-12-07 04:15:30.802742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:30:48.131 [2024-12-07 04:15:30.802766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:48.131 [2024-12-07 04:15:30.802777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.131 [2024-12-07 04:15:30.802866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:48.131 [2024-12-07 04:15:30.802878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:30:48.131 [2024-12-07 04:15:30.802890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:48.131 [2024-12-07 04:15:30.802900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.131 
[2024-12-07 04:15:30.802959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:48.131 [2024-12-07 04:15:30.802971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:30:48.131 [2024-12-07 04:15:30.802983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:48.131 [2024-12-07 04:15:30.802999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.131 [2024-12-07 04:15:30.803127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:48.131 [2024-12-07 04:15:30.803140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:30:48.131 [2024-12-07 04:15:30.803151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:48.131 [2024-12-07 04:15:30.803162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.131 [2024-12-07 04:15:30.803198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:48.131 [2024-12-07 04:15:30.803210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:30:48.131 [2024-12-07 04:15:30.803221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:48.131 [2024-12-07 04:15:30.803230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.131 [2024-12-07 04:15:30.803273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:48.131 [2024-12-07 04:15:30.803284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:30:48.131 [2024-12-07 04:15:30.803295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:48.131 [2024-12-07 04:15:30.803305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.131 [2024-12-07 04:15:30.803345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:30:48.131 [2024-12-07 04:15:30.803357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:30:48.131 [2024-12-07 04:15:30.803368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:30:48.131 [2024-12-07 04:15:30.803381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:30:48.131 [2024-12-07 04:15:30.803519] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 506.834 ms, result 0 00:30:49.511 00:30:49.511 00:30:49.511 04:15:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:50.891 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:30:50.891 04:15:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:30:50.891 04:15:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:30:50.891 04:15:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:30:50.891 04:15:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:30:51.151 04:15:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:30:51.151 04:15:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:30:51.151 04:15:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:30:51.151 04:15:33 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81117 00:30:51.151 04:15:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81117 ']' 00:30:51.151 04:15:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81117 00:30:51.151 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81117) - No such process 00:30:51.151 Process with pid 81117 is not found 00:30:51.151 04:15:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81117 is not found' 00:30:51.151 04:15:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:30:51.411 Remove shared memory files 00:30:51.411 04:15:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:30:51.411 04:15:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:51.411 04:15:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:51.411 04:15:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:51.411 04:15:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:30:51.411 04:15:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:51.411 04:15:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:51.671 ************************************ 00:30:51.671 END TEST ftl_dirty_shutdown 00:30:51.671 ************************************ 00:30:51.671 00:30:51.671 real 3m40.347s 00:30:51.671 user 4m6.301s 00:30:51.671 sys 0m39.482s 00:30:51.671 04:15:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:51.671 04:15:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:51.671 04:15:34 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:51.671 04:15:34 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:51.671 04:15:34 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:51.671 04:15:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:51.671 ************************************ 00:30:51.671 START TEST ftl_upgrade_shutdown 00:30:51.671 ************************************ 00:30:51.671 04:15:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:30:51.671 * Looking for test storage... 
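The teardown just logged follows the suite's restore-and-kill pattern: re-check the MD5 of data written before the dirty shutdown, delete the test artifacts, and reap a target that may already have exited. A minimal sketch of that flow; testdir and svcpid are illustrative names, while killprocess and remove_shm are the common.sh helpers visible above:

  md5sum -c "$testdir/testfile2.md5"          # data must survive the dirty shutdown
  rm -f "$testdir/config/ftl.json"            # saved bdev configuration
  rm -f "$testdir"/testfile "$testdir"/testfile2 "$testdir"/*.md5
  killprocess "$svcpid" || true               # pid 81117 was already gone in this run
  remove_shm                                  # clear /dev/shm leftovers
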
00:30:51.671 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:51.671 04:15:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:51.671 04:15:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:51.671 04:15:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:51.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.934 --rc genhtml_branch_coverage=1 00:30:51.934 --rc genhtml_function_coverage=1 00:30:51.934 --rc genhtml_legend=1 00:30:51.934 --rc geninfo_all_blocks=1 00:30:51.934 --rc geninfo_unexecuted_blocks=1 00:30:51.934 00:30:51.934 ' 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:51.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.934 --rc genhtml_branch_coverage=1 00:30:51.934 --rc genhtml_function_coverage=1 00:30:51.934 --rc genhtml_legend=1 00:30:51.934 --rc geninfo_all_blocks=1 00:30:51.934 --rc geninfo_unexecuted_blocks=1 00:30:51.934 00:30:51.934 ' 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:51.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.934 --rc genhtml_branch_coverage=1 00:30:51.934 --rc genhtml_function_coverage=1 00:30:51.934 --rc genhtml_legend=1 00:30:51.934 --rc geninfo_all_blocks=1 00:30:51.934 --rc geninfo_unexecuted_blocks=1 00:30:51.934 00:30:51.934 ' 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:51.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:51.934 --rc genhtml_branch_coverage=1 00:30:51.934 --rc genhtml_function_coverage=1 00:30:51.934 --rc genhtml_legend=1 00:30:51.934 --rc geninfo_all_blocks=1 00:30:51.934 --rc geninfo_unexecuted_blocks=1 00:30:51.934 00:30:51.934 ' 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:30:51.934 04:15:34 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83479 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83479 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83479 ']' 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:51.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:51.934 04:15:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:51.934 [2024-12-07 04:15:34.600874] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
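tcp_target_setup reduces to starting one spdk_tgt pinned to core 0 and blocking until its RPC socket answers; in outline, with paths exactly as this run uses them (waitforlisten is the autotest_common.sh helper that does the polling):

  spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  "$spdk_tgt_bin" --cpumask='[0]' &    # single reactor on core 0
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"        # waits on /var/tmp/spdk.sock
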
00:30:51.934 [2024-12-07 04:15:34.601022] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83479 ] 00:30:52.194 [2024-12-07 04:15:34.779567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.194 [2024-12-07 04:15:34.888397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:53.132 04:15:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:53.132 04:15:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:53.132 04:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:53.132 04:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:30:53.132 04:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:30:53.132 04:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:53.132 04:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:30:53.132 04:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:53.132 04:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:30:53.133 04:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:53.133 04:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:30:53.133 04:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:53.133 04:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:30:53.133 04:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:53.133 04:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:30:53.133 04:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:30:53.133 04:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:30:53.133 04:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:30:53.133 04:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:30:53.133 04:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:30:53.133 04:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:30:53.133 04:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:30:53.133 04:15:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:30:53.392 04:15:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:30:53.392 04:15:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:30:53.392 04:15:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:30:53.392 04:15:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:30:53.392 04:15:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:53.392 04:15:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:53.392 04:15:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:30:53.392 04:15:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:30:53.652 04:15:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:53.652 { 00:30:53.652 "name": "basen1", 00:30:53.652 "aliases": [ 00:30:53.652 "5f3744a3-57ea-4e65-a8a8-6099dc302b3b" 00:30:53.652 ], 00:30:53.652 "product_name": "NVMe disk", 00:30:53.652 "block_size": 4096, 00:30:53.652 "num_blocks": 1310720, 00:30:53.652 "uuid": "5f3744a3-57ea-4e65-a8a8-6099dc302b3b", 00:30:53.652 "numa_id": -1, 00:30:53.652 "assigned_rate_limits": { 00:30:53.652 "rw_ios_per_sec": 0, 00:30:53.652 "rw_mbytes_per_sec": 0, 00:30:53.652 "r_mbytes_per_sec": 0, 00:30:53.652 "w_mbytes_per_sec": 0 00:30:53.652 }, 00:30:53.652 "claimed": true, 00:30:53.652 "claim_type": "read_many_write_one", 00:30:53.652 "zoned": false, 00:30:53.652 "supported_io_types": { 00:30:53.652 "read": true, 00:30:53.652 "write": true, 00:30:53.652 "unmap": true, 00:30:53.652 "flush": true, 00:30:53.652 "reset": true, 00:30:53.652 "nvme_admin": true, 00:30:53.652 "nvme_io": true, 00:30:53.652 "nvme_io_md": false, 00:30:53.652 "write_zeroes": true, 00:30:53.652 "zcopy": false, 00:30:53.652 "get_zone_info": false, 00:30:53.652 "zone_management": false, 00:30:53.652 "zone_append": false, 00:30:53.652 "compare": true, 00:30:53.652 "compare_and_write": false, 00:30:53.652 "abort": true, 00:30:53.652 "seek_hole": false, 00:30:53.652 "seek_data": false, 00:30:53.652 "copy": true, 00:30:53.652 "nvme_iov_md": false 00:30:53.652 }, 00:30:53.652 "driver_specific": { 00:30:53.652 "nvme": [ 00:30:53.652 { 00:30:53.652 "pci_address": "0000:00:11.0", 00:30:53.652 "trid": { 00:30:53.652 "trtype": "PCIe", 00:30:53.652 "traddr": "0000:00:11.0" 00:30:53.652 }, 00:30:53.652 "ctrlr_data": { 00:30:53.652 "cntlid": 0, 00:30:53.652 "vendor_id": "0x1b36", 00:30:53.652 "model_number": "QEMU NVMe Ctrl", 00:30:53.652 "serial_number": "12341", 00:30:53.652 "firmware_revision": "8.0.0", 00:30:53.652 "subnqn": "nqn.2019-08.org.qemu:12341", 00:30:53.652 "oacs": { 00:30:53.653 "security": 0, 00:30:53.653 "format": 1, 00:30:53.653 "firmware": 0, 00:30:53.653 "ns_manage": 1 00:30:53.653 }, 00:30:53.653 "multi_ctrlr": false, 00:30:53.653 "ana_reporting": false 00:30:53.653 }, 00:30:53.653 "vs": { 00:30:53.653 "nvme_version": "1.4" 00:30:53.653 }, 00:30:53.653 "ns_data": { 00:30:53.653 "id": 1, 00:30:53.653 "can_share": false 00:30:53.653 } 00:30:53.653 } 00:30:53.653 ], 00:30:53.653 "mp_policy": "active_passive" 00:30:53.653 } 00:30:53.653 } 00:30:53.653 ]' 00:30:53.653 04:15:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:53.653 04:15:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:53.653 04:15:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:53.653 04:15:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:30:53.653 04:15:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:30:53.653 04:15:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:30:53.653 04:15:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:30:53.653 04:15:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:30:53.653 04:15:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:30:53.653 04:15:36 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:53.653 04:15:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:30:53.913 04:15:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=dd70bdd4-aa90-4f31-9eb7-498c14de76da 00:30:53.913 04:15:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:30:53.913 04:15:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dd70bdd4-aa90-4f31-9eb7-498c14de76da 00:30:54.173 04:15:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:30:54.433 04:15:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=026dc05d-11a1-41c4-b015-b3a97c48977a 00:30:54.433 04:15:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 026dc05d-11a1-41c4-b015-b3a97c48977a 00:30:54.693 04:15:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=6ec0eae4-2f70-4354-9e6f-222c7e1f7559 00:30:54.693 04:15:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 6ec0eae4-2f70-4354-9e6f-222c7e1f7559 ]] 00:30:54.693 04:15:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 6ec0eae4-2f70-4354-9e6f-222c7e1f7559 5120 00:30:54.693 04:15:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:30:54.693 04:15:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:30:54.693 04:15:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=6ec0eae4-2f70-4354-9e6f-222c7e1f7559 00:30:54.693 04:15:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:30:54.694 04:15:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 6ec0eae4-2f70-4354-9e6f-222c7e1f7559 00:30:54.694 04:15:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=6ec0eae4-2f70-4354-9e6f-222c7e1f7559 00:30:54.694 04:15:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:30:54.694 04:15:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:30:54.694 04:15:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:30:54.694 04:15:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6ec0eae4-2f70-4354-9e6f-222c7e1f7559 00:30:54.694 04:15:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:30:54.694 { 00:30:54.694 "name": "6ec0eae4-2f70-4354-9e6f-222c7e1f7559", 00:30:54.694 "aliases": [ 00:30:54.694 "lvs/basen1p0" 00:30:54.694 ], 00:30:54.694 "product_name": "Logical Volume", 00:30:54.694 "block_size": 4096, 00:30:54.694 "num_blocks": 5242880, 00:30:54.694 "uuid": "6ec0eae4-2f70-4354-9e6f-222c7e1f7559", 00:30:54.694 "assigned_rate_limits": { 00:30:54.694 "rw_ios_per_sec": 0, 00:30:54.694 "rw_mbytes_per_sec": 0, 00:30:54.694 "r_mbytes_per_sec": 0, 00:30:54.694 "w_mbytes_per_sec": 0 00:30:54.694 }, 00:30:54.694 "claimed": false, 00:30:54.694 "zoned": false, 00:30:54.694 "supported_io_types": { 00:30:54.694 "read": true, 00:30:54.694 "write": true, 00:30:54.694 "unmap": true, 00:30:54.694 "flush": false, 00:30:54.694 "reset": true, 00:30:54.694 "nvme_admin": false, 00:30:54.694 "nvme_io": false, 00:30:54.694 "nvme_io_md": false, 00:30:54.694 "write_zeroes": 
true, 00:30:54.694 "zcopy": false, 00:30:54.694 "get_zone_info": false, 00:30:54.694 "zone_management": false, 00:30:54.694 "zone_append": false, 00:30:54.694 "compare": false, 00:30:54.694 "compare_and_write": false, 00:30:54.694 "abort": false, 00:30:54.694 "seek_hole": true, 00:30:54.694 "seek_data": true, 00:30:54.694 "copy": false, 00:30:54.694 "nvme_iov_md": false 00:30:54.694 }, 00:30:54.694 "driver_specific": { 00:30:54.694 "lvol": { 00:30:54.694 "lvol_store_uuid": "026dc05d-11a1-41c4-b015-b3a97c48977a", 00:30:54.694 "base_bdev": "basen1", 00:30:54.694 "thin_provision": true, 00:30:54.694 "num_allocated_clusters": 0, 00:30:54.694 "snapshot": false, 00:30:54.694 "clone": false, 00:30:54.694 "esnap_clone": false 00:30:54.694 } 00:30:54.694 } 00:30:54.694 } 00:30:54.694 ]' 00:30:54.694 04:15:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:30:54.694 04:15:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:30:54.694 04:15:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:30:54.954 04:15:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:30:54.954 04:15:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:30:54.954 04:15:37 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:30:54.954 04:15:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:30:54.954 04:15:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:30:54.954 04:15:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:30:55.213 04:15:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:30:55.213 04:15:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:30:55.213 04:15:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:30:55.472 04:15:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:30:55.472 04:15:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:30:55.472 04:15:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 6ec0eae4-2f70-4354-9e6f-222c7e1f7559 -c cachen1p0 --l2p_dram_limit 2 00:30:55.472 [2024-12-07 04:15:38.144321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.472 [2024-12-07 04:15:38.144374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:55.472 [2024-12-07 04:15:38.144393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:55.473 [2024-12-07 04:15:38.144404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.473 [2024-12-07 04:15:38.144473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.473 [2024-12-07 04:15:38.144485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:55.473 [2024-12-07 04:15:38.144498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:30:55.473 [2024-12-07 04:15:38.144508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.473 [2024-12-07 04:15:38.144532] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:55.473 [2024-12-07 
04:15:38.145664] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:55.473 [2024-12-07 04:15:38.145700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.473 [2024-12-07 04:15:38.145711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:55.473 [2024-12-07 04:15:38.145727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.171 ms 00:30:55.473 [2024-12-07 04:15:38.145738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.473 [2024-12-07 04:15:38.145817] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 5e14d250-9d2a-4837-8352-f4be54806733 00:30:55.473 [2024-12-07 04:15:38.147324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.473 [2024-12-07 04:15:38.147364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:30:55.473 [2024-12-07 04:15:38.147377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:55.473 [2024-12-07 04:15:38.147391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.473 [2024-12-07 04:15:38.155181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.473 [2024-12-07 04:15:38.155219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:55.473 [2024-12-07 04:15:38.155231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.761 ms 00:30:55.473 [2024-12-07 04:15:38.155244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.473 [2024-12-07 04:15:38.155291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.473 [2024-12-07 04:15:38.155307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:55.473 [2024-12-07 04:15:38.155319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:30:55.473 [2024-12-07 04:15:38.155334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.473 [2024-12-07 04:15:38.155403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.473 [2024-12-07 04:15:38.155419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:55.473 [2024-12-07 04:15:38.155433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:55.473 [2024-12-07 04:15:38.155446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.473 [2024-12-07 04:15:38.155472] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:55.473 [2024-12-07 04:15:38.160230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.473 [2024-12-07 04:15:38.160258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:55.473 [2024-12-07 04:15:38.160274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.771 ms 00:30:55.473 [2024-12-07 04:15:38.160299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.473 [2024-12-07 04:15:38.160333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.473 [2024-12-07 04:15:38.160344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:55.473 [2024-12-07 04:15:38.160357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:55.473 [2024-12-07 04:15:38.160366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:55.473 [2024-12-07 04:15:38.160403] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:30:55.473 [2024-12-07 04:15:38.160533] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:55.473 [2024-12-07 04:15:38.160552] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:55.473 [2024-12-07 04:15:38.160565] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:55.473 [2024-12-07 04:15:38.160581] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:55.473 [2024-12-07 04:15:38.160603] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:55.473 [2024-12-07 04:15:38.160617] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:55.473 [2024-12-07 04:15:38.160626] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:55.473 [2024-12-07 04:15:38.160642] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:55.473 [2024-12-07 04:15:38.160652] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:55.473 [2024-12-07 04:15:38.160663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.473 [2024-12-07 04:15:38.160673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:55.473 [2024-12-07 04:15:38.160685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.263 ms 00:30:55.473 [2024-12-07 04:15:38.160695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.473 [2024-12-07 04:15:38.160766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.473 [2024-12-07 04:15:38.160786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:55.473 [2024-12-07 04:15:38.160799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:30:55.473 [2024-12-07 04:15:38.160808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.473 [2024-12-07 04:15:38.160898] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:55.473 [2024-12-07 04:15:38.160910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:55.473 [2024-12-07 04:15:38.160922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:55.473 [2024-12-07 04:15:38.160932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:55.473 [2024-12-07 04:15:38.160945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:55.473 [2024-12-07 04:15:38.160984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:55.473 [2024-12-07 04:15:38.160997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:55.473 [2024-12-07 04:15:38.161006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:55.473 [2024-12-07 04:15:38.161018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:55.473 [2024-12-07 04:15:38.161027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:55.473 [2024-12-07 04:15:38.161040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:55.473 [2024-12-07 04:15:38.161050] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:30:55.473 [2024-12-07 04:15:38.161063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:55.473 [2024-12-07 04:15:38.161073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:55.473 [2024-12-07 04:15:38.161084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:55.473 [2024-12-07 04:15:38.161093] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:55.473 [2024-12-07 04:15:38.161107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:55.473 [2024-12-07 04:15:38.161116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:55.473 [2024-12-07 04:15:38.161128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:55.473 [2024-12-07 04:15:38.161137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:55.473 [2024-12-07 04:15:38.161149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:55.473 [2024-12-07 04:15:38.161158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:55.473 [2024-12-07 04:15:38.161169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:55.473 [2024-12-07 04:15:38.161179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:55.473 [2024-12-07 04:15:38.161190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:55.473 [2024-12-07 04:15:38.161199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:55.474 [2024-12-07 04:15:38.161210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:55.474 [2024-12-07 04:15:38.161219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:55.474 [2024-12-07 04:15:38.161231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:55.474 [2024-12-07 04:15:38.161241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:55.474 [2024-12-07 04:15:38.161252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:55.474 [2024-12-07 04:15:38.161261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:55.474 [2024-12-07 04:15:38.161275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:55.474 [2024-12-07 04:15:38.161284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:55.474 [2024-12-07 04:15:38.161295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:55.474 [2024-12-07 04:15:38.161304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:55.474 [2024-12-07 04:15:38.161316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:55.474 [2024-12-07 04:15:38.161325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:55.474 [2024-12-07 04:15:38.161337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:55.474 [2024-12-07 04:15:38.161346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:55.474 [2024-12-07 04:15:38.161357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:55.474 [2024-12-07 04:15:38.161366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:55.474 [2024-12-07 04:15:38.161377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:55.474 [2024-12-07 04:15:38.161385] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:30:55.474 [2024-12-07 04:15:38.161399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:55.474 [2024-12-07 04:15:38.161409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:55.474 [2024-12-07 04:15:38.161421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:55.474 [2024-12-07 04:15:38.161432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:55.474 [2024-12-07 04:15:38.161446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:55.474 [2024-12-07 04:15:38.161455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:55.474 [2024-12-07 04:15:38.161475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:55.474 [2024-12-07 04:15:38.161484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:55.474 [2024-12-07 04:15:38.161498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:55.474 [2024-12-07 04:15:38.161509] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:55.474 [2024-12-07 04:15:38.161527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:55.474 [2024-12-07 04:15:38.161539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:55.474 [2024-12-07 04:15:38.161552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:55.474 [2024-12-07 04:15:38.161563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:55.474 [2024-12-07 04:15:38.161576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:55.474 [2024-12-07 04:15:38.161587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:55.474 [2024-12-07 04:15:38.161599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:55.474 [2024-12-07 04:15:38.161610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:55.474 [2024-12-07 04:15:38.161624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:55.474 [2024-12-07 04:15:38.161634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:55.474 [2024-12-07 04:15:38.161648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:55.474 [2024-12-07 04:15:38.161658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:55.474 [2024-12-07 04:15:38.161671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:55.474 [2024-12-07 04:15:38.161681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:55.474 [2024-12-07 04:15:38.161693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:55.474 [2024-12-07 04:15:38.161703] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:55.474 [2024-12-07 04:15:38.161716] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:55.474 [2024-12-07 04:15:38.161727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:55.474 [2024-12-07 04:15:38.161740] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:55.474 [2024-12-07 04:15:38.161750] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:55.474 [2024-12-07 04:15:38.161764] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:55.474 [2024-12-07 04:15:38.161775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:55.474 [2024-12-07 04:15:38.161789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:55.474 [2024-12-07 04:15:38.161799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.932 ms 00:30:55.474 [2024-12-07 04:15:38.161811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:55.474 [2024-12-07 04:15:38.161851] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
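Condensed, the bdev stack assembled above is: the base NVMe at 0000:00:11.0 carrying a thin-provisioned 20 GiB logical volume, a 5 GiB split of the cache NVMe at 0000:00:10.0 as the write-buffer/NV cache, and the FTL bdev on top. The same stack as a plain rpc.py sequence; every command is verbatim from this run, and the $() captures are only a sketch of how the script records the returned UUIDs:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0     # -> basen1
  lvs=$($rpc bdev_lvol_create_lvstore basen1 lvs)
  lvol=$($rpc bdev_lvol_create basen1p0 20480 -t -u "$lvs")            # thin, 20 GiB
  $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0    # -> cachen1
  $rpc bdev_split_create cachen1 -s 5120 1                             # -> cachen1p0
  $rpc -t 60 bdev_ftl_create -b ftl -d "$lvol" -c cachen1p0 --l2p_dram_limit 2

The generous 60 s RPC timeout matters because, as the next entries show, a fresh FTL instance scrubs the whole NV cache data region before startup completes.
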
00:30:55.474 [2024-12-07 04:15:38.161870] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:59.669 [2024-12-07 04:15:41.835446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.669 [2024-12-07 04:15:41.835516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:59.669 [2024-12-07 04:15:41.835549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3679.557 ms 00:30:59.669 [2024-12-07 04:15:41.835563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.669 [2024-12-07 04:15:41.872771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.669 [2024-12-07 04:15:41.872826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:59.669 [2024-12-07 04:15:41.872842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.912 ms 00:30:59.669 [2024-12-07 04:15:41.872856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.669 [2024-12-07 04:15:41.872949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.669 [2024-12-07 04:15:41.872981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:59.669 [2024-12-07 04:15:41.872993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:30:59.669 [2024-12-07 04:15:41.873012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.669 [2024-12-07 04:15:41.916979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.669 [2024-12-07 04:15:41.917026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:59.669 [2024-12-07 04:15:41.917040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.985 ms 00:30:59.669 [2024-12-07 04:15:41.917053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.669 [2024-12-07 04:15:41.917085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.669 [2024-12-07 04:15:41.917102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:59.669 [2024-12-07 04:15:41.917113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:59.669 [2024-12-07 04:15:41.917124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.669 [2024-12-07 04:15:41.917612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.669 [2024-12-07 04:15:41.917629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:59.669 [2024-12-07 04:15:41.917648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.416 ms 00:30:59.669 [2024-12-07 04:15:41.917661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.669 [2024-12-07 04:15:41.917697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.669 [2024-12-07 04:15:41.917710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:59.669 [2024-12-07 04:15:41.917723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:59.669 [2024-12-07 04:15:41.917737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.669 [2024-12-07 04:15:41.938245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.669 [2024-12-07 04:15:41.938289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:59.669 [2024-12-07 04:15:41.938305] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.522 ms 00:30:59.669 [2024-12-07 04:15:41.938317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.669 [2024-12-07 04:15:41.979565] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:59.669 [2024-12-07 04:15:41.981034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.669 [2024-12-07 04:15:41.981073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:59.669 [2024-12-07 04:15:41.981096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.677 ms 00:30:59.669 [2024-12-07 04:15:41.981111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.669 [2024-12-07 04:15:42.016810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.669 [2024-12-07 04:15:42.016848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:30:59.669 [2024-12-07 04:15:42.016865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.705 ms 00:30:59.669 [2024-12-07 04:15:42.016892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.669 [2024-12-07 04:15:42.016993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.669 [2024-12-07 04:15:42.017010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:59.669 [2024-12-07 04:15:42.017026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:30:59.669 [2024-12-07 04:15:42.017037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.669 [2024-12-07 04:15:42.052396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.670 [2024-12-07 04:15:42.052443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:30:59.670 [2024-12-07 04:15:42.052461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.328 ms 00:30:59.670 [2024-12-07 04:15:42.052471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.670 [2024-12-07 04:15:42.086854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.670 [2024-12-07 04:15:42.086892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:30:59.670 [2024-12-07 04:15:42.086908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.392 ms 00:30:59.670 [2024-12-07 04:15:42.086918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.670 [2024-12-07 04:15:42.087621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.670 [2024-12-07 04:15:42.087651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:59.670 [2024-12-07 04:15:42.087666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.642 ms 00:30:59.670 [2024-12-07 04:15:42.087680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.670 [2024-12-07 04:15:42.188401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.670 [2024-12-07 04:15:42.188557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:30:59.670 [2024-12-07 04:15:42.188604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 100.825 ms 00:30:59.670 [2024-12-07 04:15:42.188615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.670 [2024-12-07 04:15:42.225754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:59.670 [2024-12-07 04:15:42.225940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:30:59.670 [2024-12-07 04:15:42.225968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.114 ms 00:30:59.670 [2024-12-07 04:15:42.225979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.670 [2024-12-07 04:15:42.260914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.670 [2024-12-07 04:15:42.260968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:30:59.670 [2024-12-07 04:15:42.260986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.945 ms 00:30:59.670 [2024-12-07 04:15:42.260996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.670 [2024-12-07 04:15:42.296289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.670 [2024-12-07 04:15:42.296327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:59.670 [2024-12-07 04:15:42.296343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.286 ms 00:30:59.670 [2024-12-07 04:15:42.296353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.670 [2024-12-07 04:15:42.296400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.670 [2024-12-07 04:15:42.296412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:59.670 [2024-12-07 04:15:42.296428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:59.670 [2024-12-07 04:15:42.296438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.670 [2024-12-07 04:15:42.296534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.670 [2024-12-07 04:15:42.296549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:59.670 [2024-12-07 04:15:42.296562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:30:59.670 [2024-12-07 04:15:42.296572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.670 [2024-12-07 04:15:42.297706] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4159.701 ms, result 0 00:30:59.670 { 00:30:59.670 "name": "ftl", 00:30:59.670 "uuid": "5e14d250-9d2a-4837-8352-f4be54806733" 00:30:59.670 } 00:30:59.670 04:15:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:30:59.930 [2024-12-07 04:15:42.512454] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:59.930 04:15:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:31:00.189 04:15:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:31:00.189 [2024-12-07 04:15:42.908271] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:00.449 04:15:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:31:00.449 [2024-12-07 04:15:43.114028] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:00.449 04:15:43 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:31:01.018 Fill FTL, iteration 1 00:31:01.018 04:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:31:01.018 04:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:31:01.018 04:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:31:01.018 04:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:31:01.018 04:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:31:01.018 04:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:31:01.018 04:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:31:01.018 04:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:31:01.018 04:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:31:01.018 04:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:01.018 04:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:31:01.018 04:15:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:31:01.018 04:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:01.018 04:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:01.018 04:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:01.018 04:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:31:01.018 04:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83601 00:31:01.018 04:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:31:01.018 04:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:31:01.018 04:15:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83601 /var/tmp/spdk.tgt.sock 00:31:01.018 04:15:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83601 ']' 00:31:01.018 04:15:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:31:01.018 04:15:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:01.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:31:01.018 04:15:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:31:01.018 04:15:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:01.018 04:15:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:01.018 [2024-12-07 04:15:43.610205] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
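With the FTL bdev exported as a namespace of nqn.2018-09.io.spdk:cnode0, listening on 127.0.0.1:4420, each fill iteration pushes 1 GiB of urandom through it: 1024 blocks of 1 MiB at queue depth 2. The export side, condensed from the RPCs above (the save_config redirect target is an assumption; the log records only the call itself):

  $rpc nvmf_create_transport --trtype TCP
  $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
  $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
  $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1
  $rpc save_config > "$spdk_tgt_cnfg"    # tgt.json, consumed by the later reload
  tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
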
00:31:01.018 [2024-12-07 04:15:43.610323] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83601 ] 00:31:01.277 [2024-12-07 04:15:43.788538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.277 [2024-12-07 04:15:43.909337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:02.239 04:15:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:02.239 04:15:44 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:02.239 04:15:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:31:02.508 ftln1 00:31:02.508 04:15:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:31:02.508 04:15:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:31:02.770 04:15:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:31:02.770 04:15:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83601 00:31:02.770 04:15:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83601 ']' 00:31:02.770 04:15:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83601 00:31:02.770 04:15:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:02.770 04:15:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:02.770 04:15:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83601 00:31:02.770 killing process with pid 83601 00:31:02.770 04:15:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:02.770 04:15:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:02.770 04:15:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83601' 00:31:02.770 04:15:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83601 00:31:02.770 04:15:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83601 00:31:05.303 04:15:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:31:05.303 04:15:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:31:05.303 [2024-12-07 04:15:47.631104] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
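tcp_dd (common.sh@198-199) does not keep an initiator running; the xtrace above shows it bootstrapping one just long enough to capture a bdev config. A second spdk_tgt is started on its own RPC socket, bdev_nvme_attach_controller connects to the subsystem exported earlier and yields the namespace bdev ftln1, save_subsystem_config -n bdev is wrapped into a {"subsystems": [...]} document, and the helper app is killed; spdk_dd then replays that JSON via --json to recreate ftln1 in-process. A sketch under those assumptions (the ini.json path comes from the common.sh@153 check above; the real script uses the killprocess helper rather than a bare kill):

  # One-shot initiator: capture the bdev layer needed to reach ftln1.
  rpc_ini='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' \
      --rpc-socket=/var/tmp/spdk.tgt.sock &
  spdk_ini_pid=$!
  waitforlisten "$spdk_ini_pid" /var/tmp/spdk.tgt.sock
  $rpc_ini bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
      -f ipv4 -n nqn.2018-09.io.spdk:cnode0      # prints the new bdev name: ftln1
  {
    echo '{"subsystems": ['
    $rpc_ini save_subsystem_config -n bdev
    echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
  kill "$spdk_ini_pid"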
00:31:05.303 [2024-12-07 04:15:47.631368] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83659 ] 00:31:05.303 [2024-12-07 04:15:47.812493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.303 [2024-12-07 04:15:47.921246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:06.683  [2024-12-07T04:15:50.802Z] Copying: 252/1024 [MB] (252 MBps) [2024-12-07T04:15:51.372Z] Copying: 506/1024 [MB] (254 MBps) [2024-12-07T04:15:52.750Z] Copying: 757/1024 [MB] (251 MBps) [2024-12-07T04:15:52.750Z] Copying: 1009/1024 [MB] (252 MBps) [2024-12-07T04:15:53.688Z] Copying: 1024/1024 [MB] (average 252 MBps) 00:31:10.952 00:31:10.952 04:15:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:31:10.952 Calculate MD5 checksum, iteration 1 00:31:10.952 04:15:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:31:10.952 04:15:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:10.952 04:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:10.952 04:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:10.952 04:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:10.952 04:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:10.952 04:15:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:10.952 [2024-12-07 04:15:53.675529] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
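Verification of a fill is the mirror image of the write: the same 1024 x 1 MiB blocks are read back out of ftln1 into a flat file, which is then hashed. The spdk_dd invocation above plus the md5sum/cut pair in the next xtrace lines reduce to the following (paths and flags as logged):

  # Read the just-written GiB back from the FTL bdev and record its MD5.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
      --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
      --bs=1048576 --count=1024 --qd=2 --skip=0
  # md5sum prints "<hash>  <path>"; keep only the hash field.
  sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')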
00:31:10.952 [2024-12-07 04:15:53.675675] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83719 ] 00:31:11.212 [2024-12-07 04:15:53.856022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:11.471 [2024-12-07 04:15:53.972690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:12.868  [2024-12-07T04:15:56.176Z] Copying: 593/1024 [MB] (593 MBps) [2024-12-07T04:15:57.115Z] Copying: 1024/1024 [MB] (average 588 MBps) 00:31:14.379 00:31:14.379 04:15:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:31:14.379 04:15:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:16.327 04:15:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:31:16.327 Fill FTL, iteration 2 00:31:16.327 04:15:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=c0cdec20f173ef428831bf09cc3b9d29 00:31:16.327 04:15:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:31:16.327 04:15:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:16.327 04:15:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:31:16.327 04:15:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:31:16.327 04:15:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:16.327 04:15:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:16.328 04:15:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:16.328 04:15:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:16.328 04:15:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:31:16.328 [2024-12-07 04:15:58.852569] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
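Stepping back from the xtrace, the whole fill/verify phase is one small loop. A reconstruction of upgrade_shutdown.sh@28-48 from the parameters and offsets logged above (tcp_dd stands for the spdk_dd wrapper sketched earlier, $testfile for test/ftl/file; the exact shell is inferred, not quoted):

  bs=1048576 count=1024 qd=2 iterations=2
  seek=0 skip=0 sums=()
  for (( i = 0; i < iterations; i++ )); do
    echo "Fill FTL, iteration $(( i + 1 ))"
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs="$bs" --count="$count" --qd="$qd" --seek="$seek"
    seek=$(( seek + count ))      # advance the write window one GiB
    echo "Calculate MD5 checksum, iteration $(( i + 1 ))"
    tcp_dd --ib=ftln1 --of="$testfile" --bs="$bs" --count="$count" --qd="$qd" --skip="$skip"
    skip=$(( skip + count ))      # the read window follows the write window
    sums[i]=$(md5sum "$testfile" | cut -f1 -d' ')   # one checksum per iteration
  done

The sums array (c0cdec20... for iteration 1 above) is what the test presumably compares against re-reads after the upgrade-shutdown cycle; that comparison is not shown in this part of the log.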
00:31:16.328 [2024-12-07 04:15:58.852836] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83780 ] 00:31:16.328 [2024-12-07 04:15:59.034508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.587 [2024-12-07 04:15:59.147658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:17.993  [2024-12-07T04:16:01.666Z] Copying: 249/1024 [MB] (249 MBps) [2024-12-07T04:16:02.603Z] Copying: 504/1024 [MB] (255 MBps) [2024-12-07T04:16:03.983Z] Copying: 764/1024 [MB] (260 MBps) [2024-12-07T04:16:03.983Z] Copying: 1018/1024 [MB] (254 MBps) [2024-12-07T04:16:04.924Z] Copying: 1024/1024 [MB] (average 254 MBps) 00:31:22.188 00:31:22.188 Calculate MD5 checksum, iteration 2 00:31:22.188 04:16:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:31:22.188 04:16:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:31:22.188 04:16:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:22.188 04:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:22.188 04:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:22.188 04:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:22.188 04:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:22.188 04:16:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:22.188 [2024-12-07 04:16:04.878986] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
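For orientation: bs x count = 1048576 B x 1024 = 1 GiB per pass, so the second fill above lands at MiB 1024-2047 (--seek=1024) and its verification read covers the same window (--skip=1024); after this pass seek and skip both reach 2048 and the loop ends because i equals iterations. The elapsed times are consistent with the logged rates: roughly 1024 MiB / ~254 MBps ≈ 4.0 s per fill and 1024 MiB / ~589 MBps ≈ 1.7 s per readback, plus a few hundred milliseconds of spdk_dd startup each time.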
00:31:22.188 [2024-12-07 04:16:04.879252] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83841 ] 00:31:22.448 [2024-12-07 04:16:05.060036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:22.448 [2024-12-07 04:16:05.173729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:24.353  [2024-12-07T04:16:07.656Z] Copying: 591/1024 [MB] (591 MBps) [2024-12-07T04:16:09.037Z] Copying: 1024/1024 [MB] (average 589 MBps) 00:31:26.301 00:31:26.301 04:16:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:31:26.301 04:16:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:28.210 04:16:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:31:28.210 04:16:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=732d0b9698a8be1e026d57b0b6645593 00:31:28.210 04:16:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:31:28.210 04:16:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:31:28.210 04:16:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:28.210 [2024-12-07 04:16:10.742669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.210 [2024-12-07 04:16:10.742757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:28.210 [2024-12-07 04:16:10.742779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:31:28.210 [2024-12-07 04:16:10.742793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.210 [2024-12-07 04:16:10.742825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.210 [2024-12-07 04:16:10.742844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:28.210 [2024-12-07 04:16:10.742858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:31:28.210 [2024-12-07 04:16:10.742871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.210 [2024-12-07 04:16:10.742896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.210 [2024-12-07 04:16:10.742910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:28.210 [2024-12-07 04:16:10.742923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:28.210 [2024-12-07 04:16:10.742952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.210 [2024-12-07 04:16:10.743038] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.358 ms, result 0 00:31:28.210 true 00:31:28.210 04:16:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:28.470 { 00:31:28.470 "name": "ftl", 00:31:28.470 "properties": [ 00:31:28.470 { 00:31:28.470 "name": "superblock_version", 00:31:28.470 "value": 5, 00:31:28.470 "read-only": true 00:31:28.470 }, 00:31:28.470 { 00:31:28.470 "name": "base_device", 00:31:28.470 "bands": [ 00:31:28.470 { 00:31:28.470 "id": 0, 00:31:28.470 "state": "FREE", 00:31:28.470 "validity": 0.0 
00:31:28.470 }, 00:31:28.470 { 00:31:28.470 "id": 1, 00:31:28.470 "state": "FREE", 00:31:28.470 "validity": 0.0 00:31:28.470 }, 00:31:28.470 { 00:31:28.470 "id": 2, 00:31:28.470 "state": "FREE", 00:31:28.470 "validity": 0.0 00:31:28.470 }, 00:31:28.470 { 00:31:28.470 "id": 3, 00:31:28.470 "state": "FREE", 00:31:28.470 "validity": 0.0 00:31:28.470 }, 00:31:28.470 { 00:31:28.470 "id": 4, 00:31:28.470 "state": "FREE", 00:31:28.470 "validity": 0.0 00:31:28.470 }, 00:31:28.470 { 00:31:28.470 "id": 5, 00:31:28.470 "state": "FREE", 00:31:28.470 "validity": 0.0 00:31:28.470 }, 00:31:28.470 { 00:31:28.470 "id": 6, 00:31:28.470 "state": "FREE", 00:31:28.470 "validity": 0.0 00:31:28.470 }, 00:31:28.470 { 00:31:28.470 "id": 7, 00:31:28.470 "state": "FREE", 00:31:28.470 "validity": 0.0 00:31:28.470 }, 00:31:28.470 { 00:31:28.470 "id": 8, 00:31:28.470 "state": "FREE", 00:31:28.470 "validity": 0.0 00:31:28.470 }, 00:31:28.470 { 00:31:28.470 "id": 9, 00:31:28.470 "state": "FREE", 00:31:28.470 "validity": 0.0 00:31:28.470 }, 00:31:28.470 { 00:31:28.470 "id": 10, 00:31:28.470 "state": "FREE", 00:31:28.470 "validity": 0.0 00:31:28.470 }, 00:31:28.470 { 00:31:28.470 "id": 11, 00:31:28.470 "state": "FREE", 00:31:28.471 "validity": 0.0 00:31:28.471 }, 00:31:28.471 { 00:31:28.471 "id": 12, 00:31:28.471 "state": "FREE", 00:31:28.471 "validity": 0.0 00:31:28.471 }, 00:31:28.471 { 00:31:28.471 "id": 13, 00:31:28.471 "state": "FREE", 00:31:28.471 "validity": 0.0 00:31:28.471 }, 00:31:28.471 { 00:31:28.471 "id": 14, 00:31:28.471 "state": "FREE", 00:31:28.471 "validity": 0.0 00:31:28.471 }, 00:31:28.471 { 00:31:28.471 "id": 15, 00:31:28.471 "state": "FREE", 00:31:28.471 "validity": 0.0 00:31:28.471 }, 00:31:28.471 { 00:31:28.471 "id": 16, 00:31:28.471 "state": "FREE", 00:31:28.471 "validity": 0.0 00:31:28.471 }, 00:31:28.471 { 00:31:28.471 "id": 17, 00:31:28.471 "state": "FREE", 00:31:28.471 "validity": 0.0 00:31:28.471 } 00:31:28.471 ], 00:31:28.471 "read-only": true 00:31:28.471 }, 00:31:28.471 { 00:31:28.471 "name": "cache_device", 00:31:28.471 "type": "bdev", 00:31:28.471 "chunks": [ 00:31:28.471 { 00:31:28.471 "id": 0, 00:31:28.471 "state": "INACTIVE", 00:31:28.471 "utilization": 0.0 00:31:28.471 }, 00:31:28.471 { 00:31:28.471 "id": 1, 00:31:28.471 "state": "CLOSED", 00:31:28.471 "utilization": 1.0 00:31:28.471 }, 00:31:28.471 { 00:31:28.471 "id": 2, 00:31:28.471 "state": "CLOSED", 00:31:28.471 "utilization": 1.0 00:31:28.471 }, 00:31:28.471 { 00:31:28.471 "id": 3, 00:31:28.471 "state": "OPEN", 00:31:28.471 "utilization": 0.001953125 00:31:28.471 }, 00:31:28.471 { 00:31:28.471 "id": 4, 00:31:28.471 "state": "OPEN", 00:31:28.471 "utilization": 0.0 00:31:28.471 } 00:31:28.471 ], 00:31:28.471 "read-only": true 00:31:28.471 }, 00:31:28.471 { 00:31:28.471 "name": "verbose_mode", 00:31:28.471 "value": true, 00:31:28.471 "unit": "", 00:31:28.471 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:28.471 }, 00:31:28.471 { 00:31:28.471 "name": "prep_upgrade_on_shutdown", 00:31:28.471 "value": false, 00:31:28.471 "unit": "", 00:31:28.471 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:28.471 } 00:31:28.471 ] 00:31:28.471 } 00:31:28.471 04:16:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:31:28.471 [2024-12-07 04:16:11.178556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:31:28.471 [2024-12-07 04:16:11.178813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:28.471 [2024-12-07 04:16:11.179024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:31:28.471 [2024-12-07 04:16:11.179046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.471 [2024-12-07 04:16:11.179093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.471 [2024-12-07 04:16:11.179109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:28.471 [2024-12-07 04:16:11.179122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:28.471 [2024-12-07 04:16:11.179133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.471 [2024-12-07 04:16:11.179158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.471 [2024-12-07 04:16:11.179171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:28.471 [2024-12-07 04:16:11.179183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:28.471 [2024-12-07 04:16:11.179195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.471 [2024-12-07 04:16:11.179268] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.691 ms, result 0 00:31:28.471 true 00:31:28.731 04:16:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:31:28.731 04:16:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:31:28.731 04:16:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:28.731 04:16:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:31:28.731 04:16:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:31:28.731 04:16:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:28.990 [2024-12-07 04:16:11.606511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.990 [2024-12-07 04:16:11.606557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:28.990 [2024-12-07 04:16:11.606572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:28.990 [2024-12-07 04:16:11.606583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.990 [2024-12-07 04:16:11.606609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.991 [2024-12-07 04:16:11.606622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:28.991 [2024-12-07 04:16:11.606635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:28.991 [2024-12-07 04:16:11.606646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:28.991 [2024-12-07 04:16:11.606668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:28.991 [2024-12-07 04:16:11.606680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:28.991 [2024-12-07 04:16:11.606692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:28.991 [2024-12-07 04:16:11.606703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:31:28.991 [2024-12-07 04:16:11.606756] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.235 ms, result 0 00:31:28.991 true 00:31:28.991 04:16:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:29.251 { 00:31:29.251 "name": "ftl", 00:31:29.251 "properties": [ 00:31:29.251 { 00:31:29.251 "name": "superblock_version", 00:31:29.251 "value": 5, 00:31:29.251 "read-only": true 00:31:29.251 }, 00:31:29.251 { 00:31:29.251 "name": "base_device", 00:31:29.251 "bands": [ 00:31:29.251 { 00:31:29.251 "id": 0, 00:31:29.251 "state": "FREE", 00:31:29.251 "validity": 0.0 00:31:29.251 }, 00:31:29.251 { 00:31:29.251 "id": 1, 00:31:29.251 "state": "FREE", 00:31:29.251 "validity": 0.0 00:31:29.251 }, 00:31:29.251 { 00:31:29.251 "id": 2, 00:31:29.251 "state": "FREE", 00:31:29.251 "validity": 0.0 00:31:29.251 }, 00:31:29.251 { 00:31:29.251 "id": 3, 00:31:29.251 "state": "FREE", 00:31:29.251 "validity": 0.0 00:31:29.251 }, 00:31:29.251 { 00:31:29.251 "id": 4, 00:31:29.251 "state": "FREE", 00:31:29.251 "validity": 0.0 00:31:29.251 }, 00:31:29.251 { 00:31:29.251 "id": 5, 00:31:29.251 "state": "FREE", 00:31:29.251 "validity": 0.0 00:31:29.251 }, 00:31:29.251 { 00:31:29.251 "id": 6, 00:31:29.251 "state": "FREE", 00:31:29.251 "validity": 0.0 00:31:29.251 }, 00:31:29.251 { 00:31:29.251 "id": 7, 00:31:29.251 "state": "FREE", 00:31:29.251 "validity": 0.0 00:31:29.251 }, 00:31:29.251 { 00:31:29.251 "id": 8, 00:31:29.251 "state": "FREE", 00:31:29.251 "validity": 0.0 00:31:29.251 }, 00:31:29.251 { 00:31:29.251 "id": 9, 00:31:29.251 "state": "FREE", 00:31:29.251 "validity": 0.0 00:31:29.251 }, 00:31:29.251 { 00:31:29.251 "id": 10, 00:31:29.251 "state": "FREE", 00:31:29.251 "validity": 0.0 00:31:29.251 }, 00:31:29.251 { 00:31:29.251 "id": 11, 00:31:29.251 "state": "FREE", 00:31:29.251 "validity": 0.0 00:31:29.251 }, 00:31:29.251 { 00:31:29.251 "id": 12, 00:31:29.251 "state": "FREE", 00:31:29.251 "validity": 0.0 00:31:29.251 }, 00:31:29.251 { 00:31:29.251 "id": 13, 00:31:29.251 "state": "FREE", 00:31:29.251 "validity": 0.0 00:31:29.251 }, 00:31:29.251 { 00:31:29.251 "id": 14, 00:31:29.251 "state": "FREE", 00:31:29.251 "validity": 0.0 00:31:29.251 }, 00:31:29.251 { 00:31:29.251 "id": 15, 00:31:29.251 "state": "FREE", 00:31:29.251 "validity": 0.0 00:31:29.251 }, 00:31:29.251 { 00:31:29.251 "id": 16, 00:31:29.251 "state": "FREE", 00:31:29.251 "validity": 0.0 00:31:29.251 }, 00:31:29.251 { 00:31:29.251 "id": 17, 00:31:29.251 "state": "FREE", 00:31:29.251 "validity": 0.0 00:31:29.251 } 00:31:29.251 ], 00:31:29.251 "read-only": true 00:31:29.251 }, 00:31:29.251 { 00:31:29.251 "name": "cache_device", 00:31:29.251 "type": "bdev", 00:31:29.251 "chunks": [ 00:31:29.251 { 00:31:29.251 "id": 0, 00:31:29.251 "state": "INACTIVE", 00:31:29.251 "utilization": 0.0 00:31:29.251 }, 00:31:29.251 { 00:31:29.251 "id": 1, 00:31:29.251 "state": "CLOSED", 00:31:29.251 "utilization": 1.0 00:31:29.251 }, 00:31:29.251 { 00:31:29.251 "id": 2, 00:31:29.251 "state": "CLOSED", 00:31:29.251 "utilization": 1.0 00:31:29.251 }, 00:31:29.251 { 00:31:29.251 "id": 3, 00:31:29.251 "state": "OPEN", 00:31:29.251 "utilization": 0.001953125 00:31:29.251 }, 00:31:29.251 { 00:31:29.251 "id": 4, 00:31:29.251 "state": "OPEN", 00:31:29.251 "utilization": 0.0 00:31:29.251 } 00:31:29.251 ], 00:31:29.251 "read-only": true 00:31:29.251 }, 00:31:29.251 { 00:31:29.251 "name": "verbose_mode", 
00:31:29.251 "value": true, 00:31:29.251 "unit": "", 00:31:29.251 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:31:29.251 }, 00:31:29.251 { 00:31:29.251 "name": "prep_upgrade_on_shutdown", 00:31:29.251 "value": true, 00:31:29.251 "unit": "", 00:31:29.251 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:31:29.251 } 00:31:29.251 ] 00:31:29.251 } 00:31:29.251 04:16:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:31:29.251 04:16:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83479 ]] 00:31:29.251 04:16:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83479 00:31:29.251 04:16:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83479 ']' 00:31:29.251 04:16:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83479 00:31:29.251 04:16:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:29.251 04:16:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:29.251 04:16:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83479 00:31:29.251 04:16:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:29.251 04:16:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:29.251 killing process with pid 83479 00:31:29.251 04:16:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83479' 00:31:29.251 04:16:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83479 00:31:29.251 04:16:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83479 00:31:30.632 [2024-12-07 04:16:13.051371] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:30.632 [2024-12-07 04:16:13.071557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.632 [2024-12-07 04:16:13.071606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:30.632 [2024-12-07 04:16:13.071626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:30.632 [2024-12-07 04:16:13.071638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:30.632 [2024-12-07 04:16:13.071666] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:30.632 [2024-12-07 04:16:13.076364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:30.632 [2024-12-07 04:16:13.076400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:30.632 [2024-12-07 04:16:13.076416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.685 ms 00:31:30.632 [2024-12-07 04:16:13.076435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.242 [2024-12-07 04:16:19.923259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.242 [2024-12-07 04:16:19.923341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:37.242 [2024-12-07 04:16:19.923367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6857.904 ms 00:31:37.242 [2024-12-07 04:16:19.923379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.242 [2024-12-07 04:16:19.924396] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:31:37.242 [2024-12-07 04:16:19.924435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:37.242 [2024-12-07 04:16:19.924450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.996 ms 00:31:37.242 [2024-12-07 04:16:19.924462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.242 [2024-12-07 04:16:19.925326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.242 [2024-12-07 04:16:19.925355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:37.242 [2024-12-07 04:16:19.925370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.831 ms 00:31:37.242 [2024-12-07 04:16:19.925390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.242 [2024-12-07 04:16:19.940718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.242 [2024-12-07 04:16:19.940762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:37.242 [2024-12-07 04:16:19.940778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.311 ms 00:31:37.242 [2024-12-07 04:16:19.940791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.242 [2024-12-07 04:16:19.949740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.242 [2024-12-07 04:16:19.949784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:37.242 [2024-12-07 04:16:19.949800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.922 ms 00:31:37.242 [2024-12-07 04:16:19.949812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.242 [2024-12-07 04:16:19.949900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.242 [2024-12-07 04:16:19.949924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:37.242 [2024-12-07 04:16:19.949959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:31:37.242 [2024-12-07 04:16:19.949970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.242 [2024-12-07 04:16:19.964163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.242 [2024-12-07 04:16:19.964202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:37.242 [2024-12-07 04:16:19.964217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.195 ms 00:31:37.242 [2024-12-07 04:16:19.964229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.502 [2024-12-07 04:16:19.978160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.502 [2024-12-07 04:16:19.978200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:37.502 [2024-12-07 04:16:19.978214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.914 ms 00:31:37.502 [2024-12-07 04:16:19.978225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.502 [2024-12-07 04:16:19.992495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.502 [2024-12-07 04:16:19.992535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:37.502 [2024-12-07 04:16:19.992549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.253 ms 00:31:37.502 [2024-12-07 04:16:19.992561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.502 [2024-12-07 04:16:20.006534] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.502 [2024-12-07 04:16:20.006575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:37.502 [2024-12-07 04:16:20.006591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.908 ms 00:31:37.502 [2024-12-07 04:16:20.006602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.502 [2024-12-07 04:16:20.006643] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:37.502 [2024-12-07 04:16:20.006680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:37.502 [2024-12-07 04:16:20.006695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:37.502 [2024-12-07 04:16:20.006710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:37.502 [2024-12-07 04:16:20.006723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:37.502 [2024-12-07 04:16:20.006738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:37.502 [2024-12-07 04:16:20.006751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:37.502 [2024-12-07 04:16:20.006764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:37.502 [2024-12-07 04:16:20.006777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:37.502 [2024-12-07 04:16:20.006790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:37.502 [2024-12-07 04:16:20.006803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:37.502 [2024-12-07 04:16:20.006815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:37.502 [2024-12-07 04:16:20.006827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:37.503 [2024-12-07 04:16:20.006840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:37.503 [2024-12-07 04:16:20.006852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:37.503 [2024-12-07 04:16:20.006865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:37.503 [2024-12-07 04:16:20.006878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:37.503 [2024-12-07 04:16:20.006891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:37.503 [2024-12-07 04:16:20.006903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:37.503 [2024-12-07 04:16:20.006919] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:37.503 [2024-12-07 04:16:20.006945] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 5e14d250-9d2a-4837-8352-f4be54806733 00:31:37.503 [2024-12-07 04:16:20.006959] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:37.503 [2024-12-07 04:16:20.006971] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:31:37.503 [2024-12-07 04:16:20.006983] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:31:37.503 [2024-12-07 04:16:20.006997] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:31:37.503 [2024-12-07 04:16:20.007015] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:37.503 [2024-12-07 04:16:20.007028] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:37.503 [2024-12-07 04:16:20.007047] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:37.503 [2024-12-07 04:16:20.007058] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:37.503 [2024-12-07 04:16:20.007069] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:37.503 [2024-12-07 04:16:20.007082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.503 [2024-12-07 04:16:20.007101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:37.503 [2024-12-07 04:16:20.007114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.440 ms 00:31:37.503 [2024-12-07 04:16:20.007127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.503 [2024-12-07 04:16:20.028616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.503 [2024-12-07 04:16:20.028657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:37.503 [2024-12-07 04:16:20.028692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.486 ms 00:31:37.503 [2024-12-07 04:16:20.028706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.503 [2024-12-07 04:16:20.029343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:37.503 [2024-12-07 04:16:20.029371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:31:37.503 [2024-12-07 04:16:20.029385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.610 ms 00:31:37.503 [2024-12-07 04:16:20.029398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.503 [2024-12-07 04:16:20.099510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:37.503 [2024-12-07 04:16:20.099563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:37.503 [2024-12-07 04:16:20.099580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:37.503 [2024-12-07 04:16:20.099593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.503 [2024-12-07 04:16:20.099644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:37.503 [2024-12-07 04:16:20.099657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:37.503 [2024-12-07 04:16:20.099670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:37.503 [2024-12-07 04:16:20.099682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.503 [2024-12-07 04:16:20.099808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:37.503 [2024-12-07 04:16:20.099824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:37.503 [2024-12-07 04:16:20.099844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:37.503 [2024-12-07 04:16:20.099857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.503 [2024-12-07 04:16:20.099881] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:37.503 [2024-12-07 04:16:20.099895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:37.503 [2024-12-07 04:16:20.099907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:37.503 [2024-12-07 04:16:20.099919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.503 [2024-12-07 04:16:20.229664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:37.503 [2024-12-07 04:16:20.229746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:37.503 [2024-12-07 04:16:20.229774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:37.503 [2024-12-07 04:16:20.229786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.763 [2024-12-07 04:16:20.330987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:37.763 [2024-12-07 04:16:20.331047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:37.763 [2024-12-07 04:16:20.331066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:37.763 [2024-12-07 04:16:20.331078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.763 [2024-12-07 04:16:20.331235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:37.763 [2024-12-07 04:16:20.331251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:37.763 [2024-12-07 04:16:20.331264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:37.763 [2024-12-07 04:16:20.331284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.763 [2024-12-07 04:16:20.331351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:37.763 [2024-12-07 04:16:20.331366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:37.763 [2024-12-07 04:16:20.331379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:37.763 [2024-12-07 04:16:20.331390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.763 [2024-12-07 04:16:20.331528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:37.763 [2024-12-07 04:16:20.331544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:37.763 [2024-12-07 04:16:20.331557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:37.763 [2024-12-07 04:16:20.331569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.763 [2024-12-07 04:16:20.331622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:37.763 [2024-12-07 04:16:20.331637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:31:37.763 [2024-12-07 04:16:20.331649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:37.763 [2024-12-07 04:16:20.331661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.763 [2024-12-07 04:16:20.331714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:37.763 [2024-12-07 04:16:20.331737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:37.763 [2024-12-07 04:16:20.331750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:37.763 [2024-12-07 04:16:20.331762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.763 
[2024-12-07 04:16:20.331833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:31:37.763 [2024-12-07 04:16:20.331848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:37.763 [2024-12-07 04:16:20.331861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:31:37.763 [2024-12-07 04:16:20.331873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:37.763 [2024-12-07 04:16:20.332058] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7272.243 ms, result 0 00:31:41.059 04:16:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:31:41.059 04:16:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:31:41.059 04:16:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:31:41.059 04:16:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:31:41.059 04:16:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:41.059 04:16:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84038 00:31:41.059 04:16:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:31:41.059 04:16:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:31:41.059 04:16:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84038 00:31:41.059 04:16:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84038 ']' 00:31:41.060 04:16:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:41.060 04:16:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:41.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:41.060 04:16:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:41.060 04:16:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:41.060 04:16:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:31:41.060 [2024-12-07 04:16:23.582428] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
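Everything from the killprocess of pid 83479 down to the 'FTL shutdown', duration = 7272.243 ms line above is the prep_upgrade_on_shutdown path doing its job: with the flag set (and the jq check at upgrade_shutdown.sh@63-64 having confirmed the write-buffer cache still held data, used=3 non-empty chunks), shutdown persists the L2P, NV cache metadata, valid map, P2L, band and trim metadata, and the superblock, then marks the device clean. The target is then relaunched from the configuration saved earlier; a sketch of common.sh@85-91 as logged (waitforlisten is the autotest helper seen in the xtrace; called with just the pid here, its rpc_addr defaults to /var/tmp/spdk.sock per autotest_common.sh@839):

  # Restart the target from the snapshot taken by save_config; the FTL
  # device will be reloaded from its superblock rather than created anew.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"

The log below then shows the reload side: the configuration check, base and cache bdev opens (cachen1p0 as write buffer), the superblock load and layout dump, and a ~3.3 s NV cache scrub before the device comes back up.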
00:31:41.060 [2024-12-07 04:16:23.582719] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84038 ] 00:31:41.060 [2024-12-07 04:16:23.766510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:41.319 [2024-12-07 04:16:23.894159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:42.258 [2024-12-07 04:16:24.942592] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:42.258 [2024-12-07 04:16:24.942687] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:31:42.519 [2024-12-07 04:16:25.099520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:42.519 [2024-12-07 04:16:25.099573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:31:42.519 [2024-12-07 04:16:25.099592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:42.519 [2024-12-07 04:16:25.099604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:42.519 [2024-12-07 04:16:25.099671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:42.519 [2024-12-07 04:16:25.099686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:31:42.519 [2024-12-07 04:16:25.099699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:31:42.519 [2024-12-07 04:16:25.099710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:42.519 [2024-12-07 04:16:25.099744] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:31:42.519 [2024-12-07 04:16:25.100676] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:31:42.519 [2024-12-07 04:16:25.100711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:42.519 [2024-12-07 04:16:25.100724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:31:42.519 [2024-12-07 04:16:25.100736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.980 ms 00:31:42.519 [2024-12-07 04:16:25.100747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:42.519 [2024-12-07 04:16:25.103182] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:31:42.519 [2024-12-07 04:16:25.121749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:42.519 [2024-12-07 04:16:25.121791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:31:42.519 [2024-12-07 04:16:25.121815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.599 ms 00:31:42.519 [2024-12-07 04:16:25.121827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:42.519 [2024-12-07 04:16:25.121905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:42.519 [2024-12-07 04:16:25.121920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:31:42.519 [2024-12-07 04:16:25.121945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:31:42.519 [2024-12-07 04:16:25.121957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:42.519 [2024-12-07 04:16:25.133778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:42.519 [2024-12-07 
04:16:25.133806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:31:42.519 [2024-12-07 04:16:25.133821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.752 ms 00:31:42.519 [2024-12-07 04:16:25.133832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:42.519 [2024-12-07 04:16:25.133906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:42.519 [2024-12-07 04:16:25.133921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:31:42.519 [2024-12-07 04:16:25.133952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:31:42.519 [2024-12-07 04:16:25.133964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:42.519 [2024-12-07 04:16:25.134031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:42.519 [2024-12-07 04:16:25.134051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:31:42.519 [2024-12-07 04:16:25.134063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:31:42.519 [2024-12-07 04:16:25.134075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:42.519 [2024-12-07 04:16:25.134105] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:31:42.519 [2024-12-07 04:16:25.139653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:42.519 [2024-12-07 04:16:25.139710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:31:42.519 [2024-12-07 04:16:25.139723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.564 ms 00:31:42.519 [2024-12-07 04:16:25.139741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:42.519 [2024-12-07 04:16:25.139779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:42.519 [2024-12-07 04:16:25.139792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:31:42.519 [2024-12-07 04:16:25.139805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:31:42.519 [2024-12-07 04:16:25.139817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:42.519 [2024-12-07 04:16:25.139862] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:31:42.519 [2024-12-07 04:16:25.139898] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:31:42.519 [2024-12-07 04:16:25.139950] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:31:42.519 [2024-12-07 04:16:25.139972] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:31:42.519 [2024-12-07 04:16:25.140064] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:31:42.519 [2024-12-07 04:16:25.140081] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:31:42.519 [2024-12-07 04:16:25.140096] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:31:42.519 [2024-12-07 04:16:25.140111] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:31:42.519 [2024-12-07 04:16:25.140125] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:31:42.519 [2024-12-07 04:16:25.140143] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:31:42.520 [2024-12-07 04:16:25.140154] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:31:42.520 [2024-12-07 04:16:25.140167] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:31:42.520 [2024-12-07 04:16:25.140179] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:31:42.520 [2024-12-07 04:16:25.140192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:42.520 [2024-12-07 04:16:25.140203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:31:42.520 [2024-12-07 04:16:25.140215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.335 ms 00:31:42.520 [2024-12-07 04:16:25.140226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:42.520 [2024-12-07 04:16:25.140299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:42.520 [2024-12-07 04:16:25.140312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:31:42.520 [2024-12-07 04:16:25.140330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:31:42.520 [2024-12-07 04:16:25.140341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:42.520 [2024-12-07 04:16:25.140434] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:31:42.520 [2024-12-07 04:16:25.140450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:31:42.520 [2024-12-07 04:16:25.140463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:42.520 [2024-12-07 04:16:25.140475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:42.520 [2024-12-07 04:16:25.140487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:31:42.520 [2024-12-07 04:16:25.140497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:31:42.520 [2024-12-07 04:16:25.140509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:31:42.520 [2024-12-07 04:16:25.140520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:31:42.520 [2024-12-07 04:16:25.140533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:31:42.520 [2024-12-07 04:16:25.140544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:42.520 [2024-12-07 04:16:25.140561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:31:42.520 [2024-12-07 04:16:25.140573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:31:42.520 [2024-12-07 04:16:25.140584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:42.520 [2024-12-07 04:16:25.140594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:31:42.520 [2024-12-07 04:16:25.140606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:31:42.520 [2024-12-07 04:16:25.140617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:42.520 [2024-12-07 04:16:25.140628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:31:42.520 [2024-12-07 04:16:25.140638] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:31:42.520 [2024-12-07 04:16:25.140648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:42.520 [2024-12-07 04:16:25.140659] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:31:42.520 [2024-12-07 04:16:25.140670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:31:42.520 [2024-12-07 04:16:25.140681] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:42.520 [2024-12-07 04:16:25.140692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:31:42.520 [2024-12-07 04:16:25.140715] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:31:42.520 [2024-12-07 04:16:25.140725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:42.520 [2024-12-07 04:16:25.140736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:31:42.520 [2024-12-07 04:16:25.140747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:31:42.520 [2024-12-07 04:16:25.140758] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:42.520 [2024-12-07 04:16:25.140770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:31:42.520 [2024-12-07 04:16:25.140780] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:31:42.520 [2024-12-07 04:16:25.140790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:31:42.520 [2024-12-07 04:16:25.140802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:31:42.520 [2024-12-07 04:16:25.140813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:31:42.520 [2024-12-07 04:16:25.140824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:42.520 [2024-12-07 04:16:25.140834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:31:42.520 [2024-12-07 04:16:25.140845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:31:42.520 [2024-12-07 04:16:25.140856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:42.520 [2024-12-07 04:16:25.140867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:31:42.520 [2024-12-07 04:16:25.140877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:31:42.520 [2024-12-07 04:16:25.140887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:42.520 [2024-12-07 04:16:25.140897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:31:42.520 [2024-12-07 04:16:25.140908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:31:42.520 [2024-12-07 04:16:25.140919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:42.520 [2024-12-07 04:16:25.140943] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:31:42.520 [2024-12-07 04:16:25.140957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:31:42.520 [2024-12-07 04:16:25.140968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:31:42.520 [2024-12-07 04:16:25.140980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:31:42.520 [2024-12-07 04:16:25.140998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:31:42.520 [2024-12-07 04:16:25.141010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:31:42.520 [2024-12-07 04:16:25.141020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:31:42.520 [2024-12-07 04:16:25.141031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:31:42.520 [2024-12-07 04:16:25.141041] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:31:42.520 [2024-12-07 04:16:25.141052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:31:42.520 [2024-12-07 04:16:25.141065] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:31:42.520 [2024-12-07 04:16:25.141079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:42.520 [2024-12-07 04:16:25.141092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:31:42.520 [2024-12-07 04:16:25.141104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:31:42.520 [2024-12-07 04:16:25.141115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:31:42.520 [2024-12-07 04:16:25.141127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:31:42.520 [2024-12-07 04:16:25.141138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:31:42.520 [2024-12-07 04:16:25.141150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:31:42.520 [2024-12-07 04:16:25.141161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:31:42.520 [2024-12-07 04:16:25.141173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:31:42.520 [2024-12-07 04:16:25.141185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:31:42.520 [2024-12-07 04:16:25.141197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:31:42.520 [2024-12-07 04:16:25.141208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:31:42.520 [2024-12-07 04:16:25.141219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:31:42.520 [2024-12-07 04:16:25.141230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:31:42.520 [2024-12-07 04:16:25.141242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:31:42.520 [2024-12-07 04:16:25.141253] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:31:42.520 [2024-12-07 04:16:25.141267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:42.520 [2024-12-07 04:16:25.141279] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:42.520 [2024-12-07 04:16:25.141292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:31:42.520 [2024-12-07 04:16:25.141303] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:31:42.520 [2024-12-07 04:16:25.141316] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:31:42.520 [2024-12-07 04:16:25.141329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:42.520 [2024-12-07 04:16:25.141340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:31:42.520 [2024-12-07 04:16:25.141351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.947 ms 00:31:42.520 [2024-12-07 04:16:25.141362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:42.520 [2024-12-07 04:16:25.141415] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:31:42.520 [2024-12-07 04:16:25.141430] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:31:45.848 [2024-12-07 04:16:28.470476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.848 [2024-12-07 04:16:28.470542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:31:45.848 [2024-12-07 04:16:28.470559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3334.466 ms 00:31:45.848 [2024-12-07 04:16:28.470571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.848 [2024-12-07 04:16:28.518644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.848 [2024-12-07 04:16:28.518687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:31:45.848 [2024-12-07 04:16:28.518705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.796 ms 00:31:45.848 [2024-12-07 04:16:28.518718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.848 [2024-12-07 04:16:28.518830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.848 [2024-12-07 04:16:28.518852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:31:45.848 [2024-12-07 04:16:28.518867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:31:45.848 [2024-12-07 04:16:28.518879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.848 [2024-12-07 04:16:28.570805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.848 [2024-12-07 04:16:28.570868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:31:45.848 [2024-12-07 04:16:28.570893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 51.913 ms 00:31:45.848 [2024-12-07 04:16:28.570905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.848 [2024-12-07 04:16:28.570967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.848 [2024-12-07 04:16:28.570981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:31:45.848 [2024-12-07 04:16:28.570995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:45.848 [2024-12-07 04:16:28.571007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.848 [2024-12-07 04:16:28.571811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.848 [2024-12-07 04:16:28.571828] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:31:45.848 [2024-12-07 04:16:28.571842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.731 ms 00:31:45.848 [2024-12-07 04:16:28.571855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:45.848 [2024-12-07 04:16:28.571912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:45.848 [2024-12-07 04:16:28.571944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:31:45.848 [2024-12-07 04:16:28.571957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:31:45.848 [2024-12-07 04:16:28.571984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.108 [2024-12-07 04:16:28.597224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.108 [2024-12-07 04:16:28.597266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:31:46.108 [2024-12-07 04:16:28.597281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.250 ms 00:31:46.108 [2024-12-07 04:16:28.597294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.108 [2024-12-07 04:16:28.632070] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:46.108 [2024-12-07 04:16:28.632116] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:46.108 [2024-12-07 04:16:28.632136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.109 [2024-12-07 04:16:28.632148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:31:46.109 [2024-12-07 04:16:28.632162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.753 ms 00:31:46.109 [2024-12-07 04:16:28.632173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.109 [2024-12-07 04:16:28.651349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.109 [2024-12-07 04:16:28.651397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:31:46.109 [2024-12-07 04:16:28.651413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.150 ms 00:31:46.109 [2024-12-07 04:16:28.651426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.109 [2024-12-07 04:16:28.668561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.109 [2024-12-07 04:16:28.668604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:31:46.109 [2024-12-07 04:16:28.668620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.106 ms 00:31:46.109 [2024-12-07 04:16:28.668631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.109 [2024-12-07 04:16:28.685943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.109 [2024-12-07 04:16:28.686196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:31:46.109 [2024-12-07 04:16:28.686221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.291 ms 00:31:46.109 [2024-12-07 04:16:28.686235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.109 [2024-12-07 04:16:28.686921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.109 [2024-12-07 04:16:28.686977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:46.109 [2024-12-07 
04:16:28.686993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.532 ms 00:31:46.109 [2024-12-07 04:16:28.687007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.109 [2024-12-07 04:16:28.782781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.109 [2024-12-07 04:16:28.783120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:46.109 [2024-12-07 04:16:28.783151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 95.896 ms 00:31:46.109 [2024-12-07 04:16:28.783165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.109 [2024-12-07 04:16:28.794171] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:46.109 [2024-12-07 04:16:28.795386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.109 [2024-12-07 04:16:28.795419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:46.109 [2024-12-07 04:16:28.795436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.152 ms 00:31:46.109 [2024-12-07 04:16:28.795450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.109 [2024-12-07 04:16:28.795582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.109 [2024-12-07 04:16:28.795603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:31:46.109 [2024-12-07 04:16:28.795617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:31:46.109 [2024-12-07 04:16:28.795629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.109 [2024-12-07 04:16:28.795706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.109 [2024-12-07 04:16:28.795721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:46.109 [2024-12-07 04:16:28.795734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:31:46.109 [2024-12-07 04:16:28.795746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.109 [2024-12-07 04:16:28.795778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.109 [2024-12-07 04:16:28.795792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:46.109 [2024-12-07 04:16:28.795810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:46.109 [2024-12-07 04:16:28.795823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.109 [2024-12-07 04:16:28.795868] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:46.109 [2024-12-07 04:16:28.795884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.109 [2024-12-07 04:16:28.795896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:46.109 [2024-12-07 04:16:28.795908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:31:46.109 [2024-12-07 04:16:28.795919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.109 [2024-12-07 04:16:28.831130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.109 [2024-12-07 04:16:28.831182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:31:46.109 [2024-12-07 04:16:28.831199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.219 ms 00:31:46.109 [2024-12-07 04:16:28.831211] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.109 [2024-12-07 04:16:28.831298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:46.109 [2024-12-07 04:16:28.831311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:46.109 [2024-12-07 04:16:28.831325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:31:46.109 [2024-12-07 04:16:28.831337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:46.109 [2024-12-07 04:16:28.832900] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3738.859 ms, result 0 00:31:46.369 [2024-12-07 04:16:28.847526] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:46.369 [2024-12-07 04:16:28.863529] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:46.369 [2024-12-07 04:16:28.872521] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:46.938 04:16:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:46.938 04:16:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:46.938 04:16:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:46.938 04:16:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:46.938 04:16:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:31:47.198 [2024-12-07 04:16:29.740029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.198 [2024-12-07 04:16:29.740074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:31:47.198 [2024-12-07 04:16:29.740097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:31:47.198 [2024-12-07 04:16:29.740110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.198 [2024-12-07 04:16:29.740138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.198 [2024-12-07 04:16:29.740150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:31:47.198 [2024-12-07 04:16:29.740163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:47.198 [2024-12-07 04:16:29.740174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.198 [2024-12-07 04:16:29.740197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:47.198 [2024-12-07 04:16:29.740209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:31:47.198 [2024-12-07 04:16:29.740221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:47.199 [2024-12-07 04:16:29.740232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:47.199 [2024-12-07 04:16:29.740296] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.265 ms, result 0 00:31:47.199 true 00:31:47.199 04:16:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:31:47.458 { 00:31:47.458 "name": "ftl", 00:31:47.458 "properties": [ 00:31:47.458 { 00:31:47.458 "name": "superblock_version", 00:31:47.458 "value": 5, 00:31:47.458 "read-only": true 00:31:47.458 }, 
00:31:47.458 {
00:31:47.458 "name": "base_device",
00:31:47.458 "bands": [
00:31:47.458 {
00:31:47.458 "id": 0,
00:31:47.458 "state": "CLOSED",
00:31:47.458 "validity": 1.0
00:31:47.458 },
00:31:47.458 {
00:31:47.458 "id": 1,
00:31:47.458 "state": "CLOSED",
00:31:47.458 "validity": 1.0
00:31:47.458 },
00:31:47.458 {
00:31:47.458 "id": 2,
00:31:47.458 "state": "CLOSED",
00:31:47.458 "validity": 0.007843137254901933
00:31:47.458 },
00:31:47.458 {
00:31:47.458 "id": 3,
00:31:47.458 "state": "FREE",
00:31:47.458 "validity": 0.0
00:31:47.458 },
00:31:47.458 {
00:31:47.458 "id": 4,
00:31:47.458 "state": "FREE",
00:31:47.458 "validity": 0.0
00:31:47.458 },
00:31:47.458 {
00:31:47.458 "id": 5,
00:31:47.458 "state": "FREE",
00:31:47.458 "validity": 0.0
00:31:47.458 },
00:31:47.458 {
00:31:47.458 "id": 6,
00:31:47.458 "state": "FREE",
00:31:47.458 "validity": 0.0
00:31:47.458 },
00:31:47.458 {
00:31:47.458 "id": 7,
00:31:47.458 "state": "FREE",
00:31:47.458 "validity": 0.0
00:31:47.458 },
00:31:47.458 {
00:31:47.458 "id": 8,
00:31:47.458 "state": "FREE",
00:31:47.458 "validity": 0.0
00:31:47.458 },
00:31:47.458 {
00:31:47.458 "id": 9,
00:31:47.458 "state": "FREE",
00:31:47.458 "validity": 0.0
00:31:47.458 },
00:31:47.458 {
00:31:47.458 "id": 10,
00:31:47.458 "state": "FREE",
00:31:47.458 "validity": 0.0
00:31:47.458 },
00:31:47.458 {
00:31:47.458 "id": 11,
00:31:47.458 "state": "FREE",
00:31:47.458 "validity": 0.0
00:31:47.458 },
00:31:47.458 {
00:31:47.458 "id": 12,
00:31:47.459 "state": "FREE",
00:31:47.459 "validity": 0.0
00:31:47.459 },
00:31:47.459 {
00:31:47.459 "id": 13,
00:31:47.459 "state": "FREE",
00:31:47.459 "validity": 0.0
00:31:47.459 },
00:31:47.459 {
00:31:47.459 "id": 14,
00:31:47.459 "state": "FREE",
00:31:47.459 "validity": 0.0
00:31:47.459 },
00:31:47.459 {
00:31:47.459 "id": 15,
00:31:47.459 "state": "FREE",
00:31:47.459 "validity": 0.0
00:31:47.459 },
00:31:47.459 {
00:31:47.459 "id": 16,
00:31:47.459 "state": "FREE",
00:31:47.459 "validity": 0.0
00:31:47.459 },
00:31:47.459 {
00:31:47.459 "id": 17,
00:31:47.459 "state": "FREE",
00:31:47.459 "validity": 0.0
00:31:47.459 }
00:31:47.459 ],
00:31:47.459 "read-only": true
00:31:47.459 },
00:31:47.459 {
00:31:47.459 "name": "cache_device",
00:31:47.459 "type": "bdev",
00:31:47.459 "chunks": [
00:31:47.459 {
00:31:47.459 "id": 0,
00:31:47.459 "state": "INACTIVE",
00:31:47.459 "utilization": 0.0
00:31:47.459 },
00:31:47.459 {
00:31:47.459 "id": 1,
00:31:47.459 "state": "OPEN",
00:31:47.459 "utilization": 0.0
00:31:47.459 },
00:31:47.459 {
00:31:47.459 "id": 2,
00:31:47.459 "state": "OPEN",
00:31:47.459 "utilization": 0.0
00:31:47.459 },
00:31:47.459 {
00:31:47.459 "id": 3,
00:31:47.459 "state": "FREE",
00:31:47.459 "utilization": 0.0
00:31:47.459 },
00:31:47.459 {
00:31:47.459 "id": 4,
00:31:47.459 "state": "FREE",
00:31:47.459 "utilization": 0.0
00:31:47.459 }
00:31:47.459 ],
00:31:47.459 "read-only": true
00:31:47.459 },
00:31:47.459 {
00:31:47.459 "name": "verbose_mode",
00:31:47.459 "value": true,
00:31:47.459 "unit": "",
00:31:47.459 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:31:47.459 },
00:31:47.459 {
00:31:47.459 "name": "prep_upgrade_on_shutdown",
00:31:47.459 "value": false,
00:31:47.459 "unit": "",
00:31:47.459 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:31:47.459 }
00:31:47.459 ]
00:31:47.459 }
00:31:47.459 04:16:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
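The property dump above is the test's pre-shutdown baseline: bands 0 and 1 are CLOSED and fully valid, band 2 is CLOSED at about 0.78% validity, every other band is FREE, and the cache device reports two OPEN chunks at zero utilization. The shell trace that follows boils down to the check sketched here, built only from the rpc.py and jq invocations visible in this log; the surrounding variable handling is illustrative, not the verbatim upgrade_shutdown.sh source:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Count NV-cache chunks that still hold data; the test expects 0
    # before it starts the checksum passes.
    used=$("$rpc" bdev_ftl_get_properties -b ftl |
        jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    [[ $used -ne 0 ]] && exit 1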
00:31:47.459 04:16:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
00:31:47.459 04:16:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:31:47.718 04:16:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0
00:31:47.718 04:16:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]]
00:31:47.718 04:16:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties
00:31:47.718 04:16:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length'
00:31:47.718 04:16:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:31:47.718 Validate MD5 checksum, iteration 1
00:31:47.718 04:16:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0
00:31:47.718 04:16:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]]
00:31:47.718 04:16:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum
00:31:47.718 04:16:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0
00:31:47.718 04:16:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 ))
00:31:47.718 04:16:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:31:47.718 04:16:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1'
00:31:47.718 04:16:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:31:47.718 04:16:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:31:47.718 04:16:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:31:47.719 04:16:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:31:47.719 04:16:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:31:47.719 04:16:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:31:47.978 [2024-12-07 04:16:30.526797] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization...
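Two details in the trace above are worth flagging. First, the second jq filter selects .name == "bands", but the dump publishes the band list under the property named "base_device", so against this output the OPENED count can only ever come back 0. Second, test_validate_checksum reads the FTL namespace back over NVMe/TCP in 1 GiB windows and fingerprints each one; a condensed, hypothetical reconstruction of the loop from the traced commands looks like this (iterations, testfile and expected_sums are assumed names, everything else is taken from the trace):

    # Sketch of test_validate_checksum as traced above; tcp_dd is the
    # ftl/common.sh helper that drives spdk_dd over the NVMe/TCP initiator.
    test_validate_checksum() {
        local skip=0 i sum
        for ((i = 0; i < iterations; i++)); do
            echo "Validate MD5 checksum, iteration $((i + 1))"
            # --skip counts --bs-sized blocks, so each pass starts
            # 1024 * 1048576 bytes = 1 GiB further into the device.
            tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip="$skip"
            skip=$((skip + 1024))
            sum=$(md5sum "$testfile" | cut -f1 -d' ')
            [[ $sum != "${expected_sums[i]}" ]] && return 1
        done
    }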
00:31:47.978 [2024-12-07 04:16:30.527364] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84128 ]
00:31:48.238 [2024-12-07 04:16:30.709835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:48.238 [2024-12-07 04:16:30.819592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:50.151  [2024-12-07T04:16:33.457Z] Copying: 589/1024 [MB] (589 MBps) [2024-12-07T04:16:34.832Z] Copying: 1024/1024 [MB] (average 583 MBps)
00:31:52.096 
00:31:52.096 
00:31:52.096 04:16:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024
00:31:52.096 04:16:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:31:53.993 04:16:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:31:53.993 04:16:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c0cdec20f173ef428831bf09cc3b9d29
00:31:53.993 04:16:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c0cdec20f173ef428831bf09cc3b9d29 != \c\0\c\d\e\c\2\0\f\1\7\3\e\f\4\2\8\8\3\1\b\f\0\9\c\c\3\b\9\d\2\9 ]]
00:31:53.993 04:16:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:31:53.993 Validate MD5 checksum, iteration 2
00:31:53.993 04:16:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:31:53.993 04:16:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2'
00:31:53.993 04:16:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:31:53.993 04:16:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:31:53.993 04:16:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:31:53.993 04:16:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:31:53.993 04:16:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:31:53.993 04:16:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024
00:31:53.993 [2024-12-07 04:16:36.564044] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization...
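The @105 comparison above only looks garbled because of xtrace: inside [[ ]] the right-hand side of != is a glob pattern, so bash prints the expanded word with every character backslash-escaped. The two forms below are equivalent; here the computed and expected sums match, so the loop advances to the second window, whose start is skip=1024 blocks of 1048576 bytes, i.e. a byte offset of exactly 1 GiB:

    sum=c0cdec20f173ef428831bf09cc3b9d29
    [[ $sum != \c\0\c\d\e\c\2\0\f\1\7\3\e\f\4\2\8\8\3\1\b\f\0\9\c\c\3\b\9\d\2\9 ]]  # as traced
    [[ $sum != "c0cdec20f173ef428831bf09cc3b9d29" ]]                                # equivalent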
00:31:53.993 [2024-12-07 04:16:36.564533] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84190 ]
00:31:54.285 [2024-12-07 04:16:36.746429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:54.285 [2024-12-07 04:16:36.859124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:31:56.190  [2024-12-07T04:16:39.495Z] Copying: 567/1024 [MB] (567 MBps) [2024-12-07T04:16:40.874Z] Copying: 1024/1024 [MB] (average 562 MBps)
00:31:58.138 
00:31:58.138 
00:31:58.138 04:16:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048
00:31:58.138 04:16:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file
00:32:00.048 04:16:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d '
00:32:00.048 04:16:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=732d0b9698a8be1e026d57b0b6645593
00:32:00.048 04:16:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 732d0b9698a8be1e026d57b0b6645593 != \7\3\2\d\0\b\9\6\9\8\a\8\b\e\1\e\0\2\6\d\5\7\b\0\b\6\6\4\5\5\9\3 ]]
00:32:00.048 04:16:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ ))
00:32:00.048 04:16:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:32:00.048 04:16:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty
00:32:00.048 04:16:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84038 ]]
00:32:00.048 04:16:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84038
00:32:00.048 04:16:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid
00:32:00.048 04:16:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup
00:32:00.048 04:16:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev=
00:32:00.048 04:16:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev=
00:32:00.048 04:16:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:32:00.048 04:16:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84260
00:32:00.048 04:16:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:32:00.048 04:16:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid
00:32:00.048 04:16:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84260
00:32:00.048 04:16:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84260 ']'
00:32:00.048 04:16:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:32:00.048 04:16:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100
00:32:00.048 04:16:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:32:00.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
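This is the pivot of the whole test: after both checksum passes succeed, the first target (pid 84038) is killed with SIGKILL, so FTL never gets to write a clean shutdown state and its superblock stays dirty; a second target (pid 84260) is then started from the same tgt.json, which is what forces the recovery path traced below. A minimal sketch of the sequence, condensed from the ftl/common.sh helpers in the trace (the backgrounding and pid capture are written out here, whereas the traced helper interleaves them through command substitution):

    kill -9 "$spdk_tgt_pid"    # dirty shutdown: FTL cannot persist a clean state
    unset spdk_tgt_pid
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"    # blocks until /var/tmp/spdk.sock answers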
00:32:00.048 04:16:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:00.048 04:16:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:00.048 [2024-12-07 04:16:42.415752] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:32:00.048 [2024-12-07 04:16:42.416048] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84260 ] 00:32:00.048 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84038 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:32:00.048 [2024-12-07 04:16:42.600411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:00.048 [2024-12-07 04:16:42.731346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:01.431 [2024-12-07 04:16:43.786432] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:01.431 [2024-12-07 04:16:43.786529] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:32:01.431 [2024-12-07 04:16:43.935697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.431 [2024-12-07 04:16:43.935760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:32:01.431 [2024-12-07 04:16:43.935783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:32:01.431 [2024-12-07 04:16:43.935796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.431 [2024-12-07 04:16:43.935870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.431 [2024-12-07 04:16:43.935885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:01.431 [2024-12-07 04:16:43.935898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:32:01.431 [2024-12-07 04:16:43.935911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.431 [2024-12-07 04:16:43.935969] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:32:01.431 [2024-12-07 04:16:43.936969] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:32:01.431 [2024-12-07 04:16:43.937215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.431 [2024-12-07 04:16:43.937233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:01.431 [2024-12-07 04:16:43.937249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.262 ms 00:32:01.431 [2024-12-07 04:16:43.937262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.431 [2024-12-07 04:16:43.937684] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:32:01.431 [2024-12-07 04:16:43.963717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.431 [2024-12-07 04:16:43.963779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:32:01.431 [2024-12-07 04:16:43.963797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.075 ms 00:32:01.431 [2024-12-07 04:16:43.963810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.431 [2024-12-07 04:16:43.978080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:32:01.431 [2024-12-07 04:16:43.978126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:32:01.431 [2024-12-07 04:16:43.978141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:32:01.431 [2024-12-07 04:16:43.978152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.431 [2024-12-07 04:16:43.978751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.431 [2024-12-07 04:16:43.978770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:01.431 [2024-12-07 04:16:43.978785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.507 ms 00:32:01.431 [2024-12-07 04:16:43.978799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.431 [2024-12-07 04:16:43.978875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.431 [2024-12-07 04:16:43.978892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:01.431 [2024-12-07 04:16:43.978905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:32:01.431 [2024-12-07 04:16:43.978917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.431 [2024-12-07 04:16:43.978979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.431 [2024-12-07 04:16:43.978993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:32:01.431 [2024-12-07 04:16:43.979008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:32:01.431 [2024-12-07 04:16:43.979020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.431 [2024-12-07 04:16:43.979067] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:32:01.431 [2024-12-07 04:16:43.982993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.431 [2024-12-07 04:16:43.983031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:01.431 [2024-12-07 04:16:43.983046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.938 ms 00:32:01.431 [2024-12-07 04:16:43.983059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.431 [2024-12-07 04:16:43.983100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.431 [2024-12-07 04:16:43.983115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:32:01.431 [2024-12-07 04:16:43.983129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:01.431 [2024-12-07 04:16:43.983141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.431 [2024-12-07 04:16:43.983187] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:32:01.431 [2024-12-07 04:16:43.983219] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:32:01.431 [2024-12-07 04:16:43.983258] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:32:01.431 [2024-12-07 04:16:43.983282] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:32:01.431 [2024-12-07 04:16:43.983377] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:32:01.431 [2024-12-07 04:16:43.983394] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:32:01.431 [2024-12-07 04:16:43.983409] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:32:01.431 [2024-12-07 04:16:43.983424] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:32:01.431 [2024-12-07 04:16:43.983438] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:32:01.431 [2024-12-07 04:16:43.983452] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:32:01.431 [2024-12-07 04:16:43.983464] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:32:01.431 [2024-12-07 04:16:43.983487] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:32:01.431 [2024-12-07 04:16:43.983498] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:32:01.432 [2024-12-07 04:16:43.983514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.432 [2024-12-07 04:16:43.983528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:32:01.432 [2024-12-07 04:16:43.983540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.330 ms 00:32:01.432 [2024-12-07 04:16:43.983552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.432 [2024-12-07 04:16:43.983623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.432 [2024-12-07 04:16:43.983644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:32:01.432 [2024-12-07 04:16:43.983656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:32:01.432 [2024-12-07 04:16:43.983668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.432 [2024-12-07 04:16:43.983758] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:32:01.432 [2024-12-07 04:16:43.983777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:32:01.432 [2024-12-07 04:16:43.983789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:01.432 [2024-12-07 04:16:43.983801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:01.432 [2024-12-07 04:16:43.983814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:32:01.432 [2024-12-07 04:16:43.983824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:32:01.432 [2024-12-07 04:16:43.983836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:32:01.432 [2024-12-07 04:16:43.983846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:32:01.432 [2024-12-07 04:16:43.983857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:32:01.432 [2024-12-07 04:16:43.983868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:01.432 [2024-12-07 04:16:43.983880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:32:01.432 [2024-12-07 04:16:43.983891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:32:01.432 [2024-12-07 04:16:43.983902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:01.432 [2024-12-07 04:16:43.983913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:32:01.432 [2024-12-07 04:16:43.983924] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:32:01.432 [2024-12-07 04:16:43.983935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:01.432 [2024-12-07 04:16:43.983970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:32:01.432 [2024-12-07 04:16:43.983982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:32:01.432 [2024-12-07 04:16:43.983993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:01.432 [2024-12-07 04:16:43.984003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:32:01.432 [2024-12-07 04:16:43.984014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:32:01.432 [2024-12-07 04:16:43.984040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:01.432 [2024-12-07 04:16:43.984051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:32:01.432 [2024-12-07 04:16:43.984062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:32:01.432 [2024-12-07 04:16:43.984073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:01.432 [2024-12-07 04:16:43.984083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:32:01.432 [2024-12-07 04:16:43.984095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:32:01.432 [2024-12-07 04:16:43.984105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:01.432 [2024-12-07 04:16:43.984116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:32:01.432 [2024-12-07 04:16:43.984127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:32:01.432 [2024-12-07 04:16:43.984137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:32:01.432 [2024-12-07 04:16:43.984147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:32:01.432 [2024-12-07 04:16:43.984158] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:32:01.432 [2024-12-07 04:16:43.984167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:01.432 [2024-12-07 04:16:43.984180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:32:01.432 [2024-12-07 04:16:43.984190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:32:01.432 [2024-12-07 04:16:43.984201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:01.432 [2024-12-07 04:16:43.984212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:32:01.432 [2024-12-07 04:16:43.984227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:32:01.432 [2024-12-07 04:16:43.984237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:01.432 [2024-12-07 04:16:43.984248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:32:01.432 [2024-12-07 04:16:43.984258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:32:01.432 [2024-12-07 04:16:43.984268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:32:01.432 [2024-12-07 04:16:43.984280] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:32:01.432 [2024-12-07 04:16:43.984292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:32:01.432 [2024-12-07 04:16:43.984303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:32:01.432 [2024-12-07 04:16:43.984315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:32:01.432 [2024-12-07 04:16:43.984328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:32:01.432 [2024-12-07 04:16:43.984339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:32:01.432 [2024-12-07 04:16:43.984349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:32:01.432 [2024-12-07 04:16:43.984360] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:32:01.432 [2024-12-07 04:16:43.984370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:32:01.432 [2024-12-07 04:16:43.984381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:32:01.432 [2024-12-07 04:16:43.984393] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:32:01.432 [2024-12-07 04:16:43.984407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:01.432 [2024-12-07 04:16:43.984420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:32:01.432 [2024-12-07 04:16:43.984431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:32:01.432 [2024-12-07 04:16:43.984443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:32:01.432 [2024-12-07 04:16:43.984454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:32:01.432 [2024-12-07 04:16:43.984465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:32:01.432 [2024-12-07 04:16:43.984477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:32:01.432 [2024-12-07 04:16:43.984489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:32:01.432 [2024-12-07 04:16:43.984500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:32:01.432 [2024-12-07 04:16:43.984511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:32:01.432 [2024-12-07 04:16:43.984522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:32:01.432 [2024-12-07 04:16:43.984533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:32:01.432 [2024-12-07 04:16:43.984543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:32:01.432 [2024-12-07 04:16:43.984554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:32:01.432 [2024-12-07 04:16:43.984565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:32:01.432 [2024-12-07 04:16:43.984576] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:32:01.432 [2024-12-07 04:16:43.984588] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:01.432 [2024-12-07 04:16:43.984606] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:32:01.432 [2024-12-07 04:16:43.984618] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:32:01.432 [2024-12-07 04:16:43.984630] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:32:01.432 [2024-12-07 04:16:43.984642] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:32:01.432 [2024-12-07 04:16:43.984660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.432 [2024-12-07 04:16:43.984672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:32:01.432 [2024-12-07 04:16:43.984685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.954 ms 00:32:01.432 [2024-12-07 04:16:43.984696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.432 [2024-12-07 04:16:44.028130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.432 [2024-12-07 04:16:44.028169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:01.432 [2024-12-07 04:16:44.028186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 43.445 ms 00:32:01.432 [2024-12-07 04:16:44.028197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.432 [2024-12-07 04:16:44.028242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.432 [2024-12-07 04:16:44.028255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:32:01.432 [2024-12-07 04:16:44.028268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:32:01.432 [2024-12-07 04:16:44.028281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.432 [2024-12-07 04:16:44.074840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.432 [2024-12-07 04:16:44.075089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:01.432 [2024-12-07 04:16:44.075115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.547 ms 00:32:01.432 [2024-12-07 04:16:44.075130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.432 [2024-12-07 04:16:44.075174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.432 [2024-12-07 04:16:44.075188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:01.433 [2024-12-07 04:16:44.075202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:01.433 [2024-12-07 04:16:44.075222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.433 [2024-12-07 04:16:44.075355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.433 [2024-12-07 04:16:44.075370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:01.433 [2024-12-07 04:16:44.075383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:32:01.433 [2024-12-07 04:16:44.075394] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:32:01.433 [2024-12-07 04:16:44.075443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.433 [2024-12-07 04:16:44.075456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:01.433 [2024-12-07 04:16:44.075468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:32:01.433 [2024-12-07 04:16:44.075480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.433 [2024-12-07 04:16:44.101751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.433 [2024-12-07 04:16:44.101788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:01.433 [2024-12-07 04:16:44.101803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.280 ms 00:32:01.433 [2024-12-07 04:16:44.101821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.433 [2024-12-07 04:16:44.101975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.433 [2024-12-07 04:16:44.101994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:32:01.433 [2024-12-07 04:16:44.102008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:32:01.433 [2024-12-07 04:16:44.102020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.433 [2024-12-07 04:16:44.154789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.433 [2024-12-07 04:16:44.154836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:32:01.433 [2024-12-07 04:16:44.154856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 52.801 ms 00:32:01.433 [2024-12-07 04:16:44.154869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.693 [2024-12-07 04:16:44.168863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.693 [2024-12-07 04:16:44.168904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:32:01.693 [2024-12-07 04:16:44.168942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.593 ms 00:32:01.693 [2024-12-07 04:16:44.168955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.693 [2024-12-07 04:16:44.261907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.693 [2024-12-07 04:16:44.262200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:32:01.693 [2024-12-07 04:16:44.262229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 93.031 ms 00:32:01.693 [2024-12-07 04:16:44.262242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.693 [2024-12-07 04:16:44.262552] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:32:01.693 [2024-12-07 04:16:44.262775] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:32:01.693 [2024-12-07 04:16:44.262980] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:32:01.693 [2024-12-07 04:16:44.263151] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:32:01.693 [2024-12-07 04:16:44.263170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.693 [2024-12-07 04:16:44.263183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:32:01.693 [2024-12-07 
04:16:44.263197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.836 ms 00:32:01.693 [2024-12-07 04:16:44.263209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.693 [2024-12-07 04:16:44.263314] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:32:01.693 [2024-12-07 04:16:44.263332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.693 [2024-12-07 04:16:44.263350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:32:01.693 [2024-12-07 04:16:44.263363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:32:01.693 [2024-12-07 04:16:44.263377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.693 [2024-12-07 04:16:44.284001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.693 [2024-12-07 04:16:44.284053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:32:01.693 [2024-12-07 04:16:44.284069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.627 ms 00:32:01.693 [2024-12-07 04:16:44.284082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.693 [2024-12-07 04:16:44.296759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.693 [2024-12-07 04:16:44.296801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:32:01.693 [2024-12-07 04:16:44.296817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:32:01.693 [2024-12-07 04:16:44.296829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:01.693 [2024-12-07 04:16:44.296954] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:32:01.693 [2024-12-07 04:16:44.297289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:01.693 [2024-12-07 04:16:44.297303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:32:01.693 [2024-12-07 04:16:44.297315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.338 ms 00:32:01.693 [2024-12-07 04:16:44.297326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.262 [2024-12-07 04:16:44.877463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.262 [2024-12-07 04:16:44.877538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:32:02.262 [2024-12-07 04:16:44.877560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 579.970 ms 00:32:02.262 [2024-12-07 04:16:44.877573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.262 [2024-12-07 04:16:44.883233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.262 [2024-12-07 04:16:44.883283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:32:02.262 [2024-12-07 04:16:44.883299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.358 ms 00:32:02.262 [2024-12-07 04:16:44.883311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.262 [2024-12-07 04:16:44.883758] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:32:02.263 [2024-12-07 04:16:44.883791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.263 [2024-12-07 04:16:44.883803] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:32:02.263 [2024-12-07 04:16:44.883818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.433 ms 00:32:02.263 [2024-12-07 04:16:44.883830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.263 [2024-12-07 04:16:44.883866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.263 [2024-12-07 04:16:44.883879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:32:02.263 [2024-12-07 04:16:44.883892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:32:02.263 [2024-12-07 04:16:44.883912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.263 [2024-12-07 04:16:44.883968] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 587.988 ms, result 0 00:32:02.263 [2024-12-07 04:16:44.884018] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:32:02.263 [2024-12-07 04:16:44.884240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.263 [2024-12-07 04:16:44.884467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:32:02.263 [2024-12-07 04:16:44.884490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.224 ms 00:32:02.263 [2024-12-07 04:16:44.884502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.835 [2024-12-07 04:16:45.461017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.835 [2024-12-07 04:16:45.461101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:32:02.835 [2024-12-07 04:16:45.461140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 576.187 ms 00:32:02.835 [2024-12-07 04:16:45.461152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.835 [2024-12-07 04:16:45.467018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.835 [2024-12-07 04:16:45.467063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:32:02.835 [2024-12-07 04:16:45.467080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.330 ms 00:32:02.835 [2024-12-07 04:16:45.467093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.835 [2024-12-07 04:16:45.467647] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:32:02.835 [2024-12-07 04:16:45.467680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.835 [2024-12-07 04:16:45.467693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:32:02.835 [2024-12-07 04:16:45.467707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.554 ms 00:32:02.835 [2024-12-07 04:16:45.467719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.835 [2024-12-07 04:16:45.467756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.835 [2024-12-07 04:16:45.467770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:32:02.835 [2024-12-07 04:16:45.467783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:02.835 [2024-12-07 04:16:45.467796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.835 [2024-12-07 
04:16:45.467841] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 584.767 ms, result 0 00:32:02.835 [2024-12-07 04:16:45.467893] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:02.835 [2024-12-07 04:16:45.467908] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:32:02.835 [2024-12-07 04:16:45.467923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.835 [2024-12-07 04:16:45.467953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:32:02.835 [2024-12-07 04:16:45.467966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1172.920 ms 00:32:02.835 [2024-12-07 04:16:45.467977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.835 [2024-12-07 04:16:45.468014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.835 [2024-12-07 04:16:45.468035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:32:02.835 [2024-12-07 04:16:45.468048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:32:02.835 [2024-12-07 04:16:45.468060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.835 [2024-12-07 04:16:45.479924] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:32:02.835 [2024-12-07 04:16:45.480095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.835 [2024-12-07 04:16:45.480110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:32:02.835 [2024-12-07 04:16:45.480125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.033 ms 00:32:02.835 [2024-12-07 04:16:45.480136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.835 [2024-12-07 04:16:45.480710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.835 [2024-12-07 04:16:45.480734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:32:02.835 [2024-12-07 04:16:45.480753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.496 ms 00:32:02.835 [2024-12-07 04:16:45.480764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.835 [2024-12-07 04:16:45.482688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.835 [2024-12-07 04:16:45.482899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:32:02.835 [2024-12-07 04:16:45.482923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.904 ms 00:32:02.835 [2024-12-07 04:16:45.482953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.835 [2024-12-07 04:16:45.483008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.835 [2024-12-07 04:16:45.483021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:32:02.835 [2024-12-07 04:16:45.483033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:32:02.835 [2024-12-07 04:16:45.483053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.835 [2024-12-07 04:16:45.483162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.835 [2024-12-07 04:16:45.483177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:32:02.835 
[2024-12-07 04:16:45.483189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:32:02.835 [2024-12-07 04:16:45.483200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.835 [2024-12-07 04:16:45.483227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.835 [2024-12-07 04:16:45.483239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:32:02.835 [2024-12-07 04:16:45.483252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:32:02.835 [2024-12-07 04:16:45.483263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.835 [2024-12-07 04:16:45.483311] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:32:02.835 [2024-12-07 04:16:45.483325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.835 [2024-12-07 04:16:45.483337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:32:02.835 [2024-12-07 04:16:45.483349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:32:02.835 [2024-12-07 04:16:45.483361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.835 [2024-12-07 04:16:45.483417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:02.836 [2024-12-07 04:16:45.483429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:32:02.836 [2024-12-07 04:16:45.483442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:32:02.836 [2024-12-07 04:16:45.483454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:02.836 [2024-12-07 04:16:45.484780] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1551.085 ms, result 0 00:32:02.836 [2024-12-07 04:16:45.500457] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:02.836 [2024-12-07 04:16:45.516420] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:32:02.836 [2024-12-07 04:16:45.526818] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:03.094 Validate MD5 checksum, iteration 1 00:32:03.094 04:16:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:03.094 04:16:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:32:03.094 04:16:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:32:03.094 04:16:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:32:03.094 04:16:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:32:03.094 04:16:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:32:03.094 04:16:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:32:03.094 04:16:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:03.094 04:16:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:32:03.094 04:16:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:03.094 04:16:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:03.094 04:16:45 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:03.094 04:16:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:03.094 04:16:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:03.094 04:16:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:32:03.094 [2024-12-07 04:16:45.668314] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 00:32:03.094 [2024-12-07 04:16:45.668633] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84296 ] 00:32:03.352 [2024-12-07 04:16:45.849654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:03.352 [2024-12-07 04:16:45.967744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:05.258  [2024-12-07T04:16:48.562Z] Copying: 561/1024 [MB] (561 MBps) [2024-12-07T04:16:49.940Z] Copying: 1024/1024 [MB] (average 560 MBps) 00:32:07.204 00:32:07.463 04:16:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:32:07.463 04:16:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:09.369 04:16:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:09.369 Validate MD5 checksum, iteration 2 00:32:09.369 04:16:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c0cdec20f173ef428831bf09cc3b9d29 00:32:09.369 04:16:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c0cdec20f173ef428831bf09cc3b9d29 != \c\0\c\d\e\c\2\0\f\1\7\3\e\f\4\2\8\8\3\1\b\f\0\9\c\c\3\b\9\d\2\9 ]] 00:32:09.369 04:16:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:09.369 04:16:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:09.369 04:16:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:32:09.369 04:16:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:09.369 04:16:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:32:09.369 04:16:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:32:09.369 04:16:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:32:09.369 04:16:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:32:09.369 04:16:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:32:09.369 [2024-12-07 04:16:51.718860] Starting SPDK v25.01-pre git sha1 
42416bc2c / DPDK 24.03.0 initialization... 00:32:09.370 [2024-12-07 04:16:51.719141] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84363 ] 00:32:09.370 [2024-12-07 04:16:51.895315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:09.370 [2024-12-07 04:16:52.007382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:11.273  [2024-12-07T04:16:54.610Z] Copying: 567/1024 [MB] (567 MBps) [2024-12-07T04:16:55.985Z] Copying: 1024/1024 [MB] (average 563 MBps) 00:32:13.249 00:32:13.249 04:16:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:32:13.249 04:16:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:15.151 04:16:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:32:15.151 04:16:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=732d0b9698a8be1e026d57b0b6645593 00:32:15.151 04:16:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 732d0b9698a8be1e026d57b0b6645593 != \7\3\2\d\0\b\9\6\9\8\a\8\b\e\1\e\0\2\6\d\5\7\b\0\b\6\6\4\5\5\9\3 ]] 00:32:15.151 04:16:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:32:15.151 04:16:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:32:15.151 04:16:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:32:15.151 04:16:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:32:15.151 04:16:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:32:15.151 04:16:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:32:15.151 04:16:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:32:15.151 04:16:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:32:15.151 04:16:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:32:15.151 04:16:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:32:15.151 04:16:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84260 ]] 00:32:15.151 04:16:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84260 00:32:15.151 04:16:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84260 ']' 00:32:15.151 04:16:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84260 00:32:15.151 04:16:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:32:15.151 04:16:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:15.151 04:16:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84260 00:32:15.151 killing process with pid 84260 00:32:15.151 04:16:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:15.151 04:16:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:15.151 04:16:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84260' 00:32:15.151 04:16:57 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 84260 00:32:15.151 04:16:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84260 00:32:16.092 [2024-12-07 04:16:58.801908] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:32:16.092 [2024-12-07 04:16:58.823498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.092 [2024-12-07 04:16:58.823551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:32:16.092 [2024-12-07 04:16:58.823572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:32:16.092 [2024-12-07 04:16:58.823585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.092 [2024-12-07 04:16:58.823614] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:32:16.353 [2024-12-07 04:16:58.828061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.353 [2024-12-07 04:16:58.828104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:32:16.353 [2024-12-07 04:16:58.828120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.433 ms 00:32:16.353 [2024-12-07 04:16:58.828132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.353 [2024-12-07 04:16:58.828367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.353 [2024-12-07 04:16:58.828384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:32:16.353 [2024-12-07 04:16:58.828397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.203 ms 00:32:16.353 [2024-12-07 04:16:58.828409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.353 [2024-12-07 04:16:58.829590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.353 [2024-12-07 04:16:58.829631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:32:16.353 [2024-12-07 04:16:58.829646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.161 ms 00:32:16.353 [2024-12-07 04:16:58.829664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.353 [2024-12-07 04:16:58.830579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.353 [2024-12-07 04:16:58.830614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:32:16.353 [2024-12-07 04:16:58.830629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.878 ms 00:32:16.353 [2024-12-07 04:16:58.830642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.353 [2024-12-07 04:16:58.845091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.353 [2024-12-07 04:16:58.845133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:32:16.353 [2024-12-07 04:16:58.845156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.426 ms 00:32:16.353 [2024-12-07 04:16:58.845168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.353 [2024-12-07 04:16:58.852955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.353 [2024-12-07 04:16:58.852996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:32:16.353 [2024-12-07 04:16:58.853011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.754 ms 00:32:16.353 [2024-12-07 04:16:58.853024] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:32:16.353 [2024-12-07 04:16:58.853124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.353 [2024-12-07 04:16:58.853140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:32:16.353 [2024-12-07 04:16:58.853153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:32:16.353 [2024-12-07 04:16:58.853172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.353 [2024-12-07 04:16:58.867343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.353 [2024-12-07 04:16:58.867382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:32:16.353 [2024-12-07 04:16:58.867397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.172 ms 00:32:16.353 [2024-12-07 04:16:58.867408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.353 [2024-12-07 04:16:58.881822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.353 [2024-12-07 04:16:58.881861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:32:16.353 [2024-12-07 04:16:58.881875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.394 ms 00:32:16.353 [2024-12-07 04:16:58.881887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.353 [2024-12-07 04:16:58.895889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.353 [2024-12-07 04:16:58.895935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:32:16.353 [2024-12-07 04:16:58.895950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.984 ms 00:32:16.353 [2024-12-07 04:16:58.895961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.353 [2024-12-07 04:16:58.909911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.353 [2024-12-07 04:16:58.909955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:32:16.353 [2024-12-07 04:16:58.909970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.887 ms 00:32:16.353 [2024-12-07 04:16:58.909981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.353 [2024-12-07 04:16:58.910021] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:32:16.354 [2024-12-07 04:16:58.910041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:32:16.354 [2024-12-07 04:16:58.910056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:32:16.354 [2024-12-07 04:16:58.910069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:32:16.354 [2024-12-07 04:16:58.910082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:16.354 [2024-12-07 04:16:58.910096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:16.354 [2024-12-07 04:16:58.910108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:16.354 [2024-12-07 04:16:58.910120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:16.354 [2024-12-07 04:16:58.910133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:16.354 
[2024-12-07 04:16:58.910145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:16.354 [2024-12-07 04:16:58.910158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:16.354 [2024-12-07 04:16:58.910169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:16.354 [2024-12-07 04:16:58.910181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:16.354 [2024-12-07 04:16:58.910193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:16.354 [2024-12-07 04:16:58.910205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:32:16.354 [2024-12-07 04:16:58.910218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:16.354 [2024-12-07 04:16:58.910230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:16.354 [2024-12-07 04:16:58.910242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:16.354 [2024-12-07 04:16:58.910253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:16.354 [2024-12-07 04:16:58.910268] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:32:16.354 [2024-12-07 04:16:58.910280] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 5e14d250-9d2a-4837-8352-f4be54806733 00:32:16.354 [2024-12-07 04:16:58.910293] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:32:16.354 [2024-12-07 04:16:58.910304] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:32:16.354 [2024-12-07 04:16:58.910315] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:32:16.354 [2024-12-07 04:16:58.910327] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:32:16.354 [2024-12-07 04:16:58.910338] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:32:16.354 [2024-12-07 04:16:58.910350] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:32:16.354 [2024-12-07 04:16:58.910378] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:32:16.354 [2024-12-07 04:16:58.910389] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:32:16.354 [2024-12-07 04:16:58.910400] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:32:16.354 [2024-12-07 04:16:58.910413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.354 [2024-12-07 04:16:58.910427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:32:16.354 [2024-12-07 04:16:58.910441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.394 ms 00:32:16.354 [2024-12-07 04:16:58.910452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.354 [2024-12-07 04:16:58.930760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.354 [2024-12-07 04:16:58.930799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:32:16.354 [2024-12-07 04:16:58.930814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.304 ms 00:32:16.354 [2024-12-07 04:16:58.930827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
00:32:16.354 [2024-12-07 04:16:58.931459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:32:16.354 [2024-12-07 04:16:58.931479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:32:16.354 [2024-12-07 04:16:58.931492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.600 ms 00:32:16.354 [2024-12-07 04:16:58.931504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.354 [2024-12-07 04:16:59.000064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:16.354 [2024-12-07 04:16:59.000105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:32:16.354 [2024-12-07 04:16:59.000121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:16.354 [2024-12-07 04:16:59.000140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.354 [2024-12-07 04:16:59.000179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:16.354 [2024-12-07 04:16:59.000192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:32:16.354 [2024-12-07 04:16:59.000205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:16.354 [2024-12-07 04:16:59.000218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.354 [2024-12-07 04:16:59.000309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:16.354 [2024-12-07 04:16:59.000325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:32:16.354 [2024-12-07 04:16:59.000338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:16.354 [2024-12-07 04:16:59.000350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.354 [2024-12-07 04:16:59.000378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:16.354 [2024-12-07 04:16:59.000390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:32:16.354 [2024-12-07 04:16:59.000403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:16.354 [2024-12-07 04:16:59.000414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.613 [2024-12-07 04:16:59.130501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:16.613 [2024-12-07 04:16:59.130577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:32:16.613 [2024-12-07 04:16:59.130599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:16.613 [2024-12-07 04:16:59.130613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.613 [2024-12-07 04:16:59.234100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:16.613 [2024-12-07 04:16:59.234170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:32:16.613 [2024-12-07 04:16:59.234191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:16.613 [2024-12-07 04:16:59.234204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.613 [2024-12-07 04:16:59.234370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:16.613 [2024-12-07 04:16:59.234387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:32:16.613 [2024-12-07 04:16:59.234402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:16.613 [2024-12-07 04:16:59.234414] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.613 [2024-12-07 04:16:59.234481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:16.613 [2024-12-07 04:16:59.234517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:32:16.613 [2024-12-07 04:16:59.234531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:16.613 [2024-12-07 04:16:59.234543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.613 [2024-12-07 04:16:59.234708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:16.613 [2024-12-07 04:16:59.234725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:32:16.613 [2024-12-07 04:16:59.234738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:16.613 [2024-12-07 04:16:59.234751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.613 [2024-12-07 04:16:59.234800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:16.613 [2024-12-07 04:16:59.234815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:32:16.613 [2024-12-07 04:16:59.234833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:16.613 [2024-12-07 04:16:59.234845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.613 [2024-12-07 04:16:59.234898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:16.613 [2024-12-07 04:16:59.234951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:32:16.613 [2024-12-07 04:16:59.234966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:16.613 [2024-12-07 04:16:59.234979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.613 [2024-12-07 04:16:59.235041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:32:16.613 [2024-12-07 04:16:59.235060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:32:16.613 [2024-12-07 04:16:59.235073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:32:16.613 [2024-12-07 04:16:59.235085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:32:16.613 [2024-12-07 04:16:59.235252] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 412.372 ms, result 0 00:32:17.992 04:17:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:32:17.992 04:17:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:32:17.992 04:17:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:32:17.992 04:17:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:32:17.992 04:17:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:32:17.992 04:17:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:32:17.992 Remove shared memory files 00:32:17.992 04:17:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:32:17.992 04:17:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:17.992 04:17:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:32:17.992 04:17:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:32:17.992 04:17:00 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84038 00:32:17.992 04:17:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:17.992 04:17:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:32:17.992 ************************************ 00:32:17.992 END TEST ftl_upgrade_shutdown 00:32:17.992 ************************************ 00:32:17.992 00:32:17.992 real 1m26.406s 00:32:17.992 user 1m56.361s 00:32:17.992 sys 0m25.239s 00:32:17.992 04:17:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:17.992 04:17:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:32:17.992 04:17:00 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:32:17.992 04:17:00 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:32:17.992 04:17:00 ftl -- ftl/ftl.sh@14 -- # killprocess 76630 00:32:17.992 04:17:00 ftl -- common/autotest_common.sh@954 -- # '[' -z 76630 ']' 00:32:17.992 04:17:00 ftl -- common/autotest_common.sh@958 -- # kill -0 76630 00:32:17.992 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76630) - No such process 00:32:17.992 Process with pid 76630 is not found 00:32:17.992 04:17:00 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76630 is not found' 00:32:17.992 04:17:00 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:32:17.992 04:17:00 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84490 00:32:17.992 04:17:00 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:17.992 04:17:00 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84490 00:32:17.992 04:17:00 ftl -- common/autotest_common.sh@835 -- # '[' -z 84490 ']' 00:32:17.992 04:17:00 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:17.992 04:17:00 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:17.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:17.992 04:17:00 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:17.992 04:17:00 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:17.992 04:17:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:18.251 [2024-12-07 04:17:00.818448] Starting SPDK v25.01-pre git sha1 42416bc2c / DPDK 24.03.0 initialization... 
00:32:18.251 [2024-12-07 04:17:00.818607] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84490 ] 00:32:18.510 [2024-12-07 04:17:01.006931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:18.510 [2024-12-07 04:17:01.141865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:19.448 04:17:02 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:19.448 04:17:02 ftl -- common/autotest_common.sh@868 -- # return 0 00:32:19.448 04:17:02 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:32:19.708 nvme0n1 00:32:19.969 04:17:02 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:32:19.969 04:17:02 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:19.969 04:17:02 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:32:19.969 04:17:02 ftl -- ftl/common.sh@28 -- # stores=026dc05d-11a1-41c4-b015-b3a97c48977a 00:32:19.969 04:17:02 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:32:19.969 04:17:02 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 026dc05d-11a1-41c4-b015-b3a97c48977a 00:32:20.228 04:17:02 ftl -- ftl/ftl.sh@23 -- # killprocess 84490 00:32:20.228 04:17:02 ftl -- common/autotest_common.sh@954 -- # '[' -z 84490 ']' 00:32:20.228 04:17:02 ftl -- common/autotest_common.sh@958 -- # kill -0 84490 00:32:20.228 04:17:02 ftl -- common/autotest_common.sh@959 -- # uname 00:32:20.228 04:17:02 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:20.228 04:17:02 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84490 00:32:20.228 04:17:02 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:20.229 04:17:02 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:20.229 killing process with pid 84490 00:32:20.229 04:17:02 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84490' 00:32:20.229 04:17:02 ftl -- common/autotest_common.sh@973 -- # kill 84490 00:32:20.229 04:17:02 ftl -- common/autotest_common.sh@978 -- # wait 84490 00:32:22.882 04:17:05 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:23.143 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:23.401 Waiting for block devices as requested 00:32:23.401 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:23.659 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:23.659 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:32:23.659 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:32:28.937 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:32:28.937 04:17:11 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:32:28.937 Remove shared memory files 00:32:28.937 04:17:11 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:32:28.937 04:17:11 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:32:28.937 04:17:11 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:32:28.937 04:17:11 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:32:28.937 04:17:11 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:32:28.937 04:17:11 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:32:28.937 
************************************ 00:32:28.937 END TEST ftl 00:32:28.937 ************************************ 00:32:28.937 00:32:28.937 real 11m33.648s 00:32:28.937 user 14m11.617s 00:32:28.937 sys 1m33.378s 00:32:28.937 04:17:11 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:28.937 04:17:11 ftl -- common/autotest_common.sh@10 -- # set +x 00:32:28.937 04:17:11 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:28.937 04:17:11 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:32:28.937 04:17:11 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:28.937 04:17:11 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:32:28.937 04:17:11 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:28.937 04:17:11 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:28.937 04:17:11 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:32:28.937 04:17:11 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:32:28.937 04:17:11 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:32:28.937 04:17:11 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:32:28.937 04:17:11 -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:28.937 04:17:11 -- common/autotest_common.sh@10 -- # set +x 00:32:28.937 04:17:11 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:32:28.937 04:17:11 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:32:28.937 04:17:11 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:32:28.937 04:17:11 -- common/autotest_common.sh@10 -- # set +x 00:32:31.473 INFO: APP EXITING 00:32:31.473 INFO: killing all VMs 00:32:31.473 INFO: killing vhost app 00:32:31.473 INFO: EXIT DONE 00:32:32.041 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:32.300 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:32:32.300 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:32:32.562 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:32:32.562 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:32:33.130 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:33.389 Cleaning 00:32:33.389 Removing: /var/run/dpdk/spdk0/config 00:32:33.389 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:33.389 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:33.389 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:33.389 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:33.389 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:33.389 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:33.389 Removing: /var/run/dpdk/spdk0 00:32:33.389 Removing: /var/run/dpdk/spdk_pid57492 00:32:33.389 Removing: /var/run/dpdk/spdk_pid57733 00:32:33.389 Removing: /var/run/dpdk/spdk_pid57962 00:32:33.389 Removing: /var/run/dpdk/spdk_pid58066 00:32:33.389 Removing: /var/run/dpdk/spdk_pid58122 00:32:33.389 Removing: /var/run/dpdk/spdk_pid58260 00:32:33.389 Removing: /var/run/dpdk/spdk_pid58279 00:32:33.389 Removing: /var/run/dpdk/spdk_pid58489 00:32:33.389 Removing: /var/run/dpdk/spdk_pid58595 00:32:33.389 Removing: /var/run/dpdk/spdk_pid58707 00:32:33.389 Removing: /var/run/dpdk/spdk_pid58830 00:32:33.389 Removing: /var/run/dpdk/spdk_pid58938 00:32:33.389 Removing: /var/run/dpdk/spdk_pid58977 00:32:33.389 Removing: /var/run/dpdk/spdk_pid59014 00:32:33.389 Removing: /var/run/dpdk/spdk_pid59090 00:32:33.389 Removing: /var/run/dpdk/spdk_pid59212 00:32:33.389 Removing: /var/run/dpdk/spdk_pid59654 00:32:33.389 Removing: /var/run/dpdk/spdk_pid59737 
00:32:33.649 Removing: /var/run/dpdk/spdk_pid59811 00:32:33.649 Removing: /var/run/dpdk/spdk_pid59832 00:32:33.649 Removing: /var/run/dpdk/spdk_pid60000 00:32:33.649 Removing: /var/run/dpdk/spdk_pid60016 00:32:33.649 Removing: /var/run/dpdk/spdk_pid60164 00:32:33.649 Removing: /var/run/dpdk/spdk_pid60191 00:32:33.649 Removing: /var/run/dpdk/spdk_pid60255 00:32:33.649 Removing: /var/run/dpdk/spdk_pid60278 00:32:33.649 Removing: /var/run/dpdk/spdk_pid60343 00:32:33.649 Removing: /var/run/dpdk/spdk_pid60366 00:32:33.649 Removing: /var/run/dpdk/spdk_pid60563 00:32:33.649 Removing: /var/run/dpdk/spdk_pid60600 00:32:33.649 Removing: /var/run/dpdk/spdk_pid60689 00:32:33.649 Removing: /var/run/dpdk/spdk_pid60883 00:32:33.649 Removing: /var/run/dpdk/spdk_pid60978 00:32:33.649 Removing: /var/run/dpdk/spdk_pid61020 00:32:33.649 Removing: /var/run/dpdk/spdk_pid61472 00:32:33.649 Removing: /var/run/dpdk/spdk_pid61570 00:32:33.649 Removing: /var/run/dpdk/spdk_pid61690 00:32:33.649 Removing: /var/run/dpdk/spdk_pid61743 00:32:33.649 Removing: /var/run/dpdk/spdk_pid61769 00:32:33.649 Removing: /var/run/dpdk/spdk_pid61853 00:32:33.649 Removing: /var/run/dpdk/spdk_pid62502 00:32:33.649 Removing: /var/run/dpdk/spdk_pid62544 00:32:33.649 Removing: /var/run/dpdk/spdk_pid63031 00:32:33.649 Removing: /var/run/dpdk/spdk_pid63136 00:32:33.649 Removing: /var/run/dpdk/spdk_pid63256 00:32:33.649 Removing: /var/run/dpdk/spdk_pid63309 00:32:33.649 Removing: /var/run/dpdk/spdk_pid63340 00:32:33.649 Removing: /var/run/dpdk/spdk_pid63365 00:32:33.649 Removing: /var/run/dpdk/spdk_pid65257 00:32:33.649 Removing: /var/run/dpdk/spdk_pid65400 00:32:33.649 Removing: /var/run/dpdk/spdk_pid65409 00:32:33.649 Removing: /var/run/dpdk/spdk_pid65424 00:32:33.649 Removing: /var/run/dpdk/spdk_pid65470 00:32:33.649 Removing: /var/run/dpdk/spdk_pid65474 00:32:33.649 Removing: /var/run/dpdk/spdk_pid65486 00:32:33.649 Removing: /var/run/dpdk/spdk_pid65531 00:32:33.649 Removing: /var/run/dpdk/spdk_pid65535 00:32:33.649 Removing: /var/run/dpdk/spdk_pid65547 00:32:33.649 Removing: /var/run/dpdk/spdk_pid65597 00:32:33.649 Removing: /var/run/dpdk/spdk_pid65601 00:32:33.649 Removing: /var/run/dpdk/spdk_pid65613 00:32:33.649 Removing: /var/run/dpdk/spdk_pid67039 00:32:33.649 Removing: /var/run/dpdk/spdk_pid67152 00:32:33.649 Removing: /var/run/dpdk/spdk_pid68591 00:32:33.649 Removing: /var/run/dpdk/spdk_pid70349 00:32:33.649 Removing: /var/run/dpdk/spdk_pid70429 00:32:33.649 Removing: /var/run/dpdk/spdk_pid70504 00:32:33.649 Removing: /var/run/dpdk/spdk_pid70618 00:32:33.649 Removing: /var/run/dpdk/spdk_pid70711 00:32:33.649 Removing: /var/run/dpdk/spdk_pid70812 00:32:33.649 Removing: /var/run/dpdk/spdk_pid70892 00:32:33.649 Removing: /var/run/dpdk/spdk_pid70977 00:32:33.649 Removing: /var/run/dpdk/spdk_pid71088 00:32:33.649 Removing: /var/run/dpdk/spdk_pid71180 00:32:33.909 Removing: /var/run/dpdk/spdk_pid71281 00:32:33.909 Removing: /var/run/dpdk/spdk_pid71366 00:32:33.909 Removing: /var/run/dpdk/spdk_pid71447 00:32:33.909 Removing: /var/run/dpdk/spdk_pid71557 00:32:33.909 Removing: /var/run/dpdk/spdk_pid71654 00:32:33.909 Removing: /var/run/dpdk/spdk_pid71750 00:32:33.909 Removing: /var/run/dpdk/spdk_pid71835 00:32:33.909 Removing: /var/run/dpdk/spdk_pid71911 00:32:33.909 Removing: /var/run/dpdk/spdk_pid72020 00:32:33.909 Removing: /var/run/dpdk/spdk_pid72114 00:32:33.909 Removing: /var/run/dpdk/spdk_pid72214 00:32:33.909 Removing: /var/run/dpdk/spdk_pid72294 00:32:33.909 Removing: /var/run/dpdk/spdk_pid72374 00:32:33.909 Removing: 
/var/run/dpdk/spdk_pid72448 00:32:33.909 Removing: /var/run/dpdk/spdk_pid72533 00:32:33.909 Removing: /var/run/dpdk/spdk_pid72644 00:32:33.909 Removing: /var/run/dpdk/spdk_pid72737 00:32:33.909 Removing: /var/run/dpdk/spdk_pid72838 00:32:33.909 Removing: /var/run/dpdk/spdk_pid72918 00:32:33.909 Removing: /var/run/dpdk/spdk_pid72992 00:32:33.909 Removing: /var/run/dpdk/spdk_pid73073 00:32:33.909 Removing: /var/run/dpdk/spdk_pid73157 00:32:33.909 Removing: /var/run/dpdk/spdk_pid73260 00:32:33.909 Removing: /var/run/dpdk/spdk_pid73363 00:32:33.909 Removing: /var/run/dpdk/spdk_pid73508 00:32:33.909 Removing: /var/run/dpdk/spdk_pid73803 00:32:33.909 Removing: /var/run/dpdk/spdk_pid73845 00:32:33.909 Removing: /var/run/dpdk/spdk_pid74300 00:32:33.909 Removing: /var/run/dpdk/spdk_pid74492 00:32:33.909 Removing: /var/run/dpdk/spdk_pid74591 00:32:33.909 Removing: /var/run/dpdk/spdk_pid74706 00:32:33.909 Removing: /var/run/dpdk/spdk_pid74761 00:32:33.909 Removing: /var/run/dpdk/spdk_pid74791 00:32:33.909 Removing: /var/run/dpdk/spdk_pid75087 00:32:33.909 Removing: /var/run/dpdk/spdk_pid75154 00:32:33.909 Removing: /var/run/dpdk/spdk_pid75240 00:32:33.909 Removing: /var/run/dpdk/spdk_pid75673 00:32:33.909 Removing: /var/run/dpdk/spdk_pid75819 00:32:33.909 Removing: /var/run/dpdk/spdk_pid76630 00:32:33.909 Removing: /var/run/dpdk/spdk_pid76779 00:32:33.909 Removing: /var/run/dpdk/spdk_pid76984 00:32:33.909 Removing: /var/run/dpdk/spdk_pid77092 00:32:33.909 Removing: /var/run/dpdk/spdk_pid77478 00:32:33.909 Removing: /var/run/dpdk/spdk_pid77771 00:32:33.909 Removing: /var/run/dpdk/spdk_pid78140 00:32:33.909 Removing: /var/run/dpdk/spdk_pid78339 00:32:33.909 Removing: /var/run/dpdk/spdk_pid78480 00:32:33.909 Removing: /var/run/dpdk/spdk_pid78544 00:32:33.909 Removing: /var/run/dpdk/spdk_pid78687 00:32:33.909 Removing: /var/run/dpdk/spdk_pid78719 00:32:33.909 Removing: /var/run/dpdk/spdk_pid78787 00:32:33.909 Removing: /var/run/dpdk/spdk_pid78997 00:32:33.909 Removing: /var/run/dpdk/spdk_pid79233 00:32:34.168 Removing: /var/run/dpdk/spdk_pid79690 00:32:34.168 Removing: /var/run/dpdk/spdk_pid80133 00:32:34.168 Removing: /var/run/dpdk/spdk_pid80595 00:32:34.168 Removing: /var/run/dpdk/spdk_pid81117 00:32:34.168 Removing: /var/run/dpdk/spdk_pid81266 00:32:34.168 Removing: /var/run/dpdk/spdk_pid81360 00:32:34.168 Removing: /var/run/dpdk/spdk_pid82003 00:32:34.168 Removing: /var/run/dpdk/spdk_pid82074 00:32:34.168 Removing: /var/run/dpdk/spdk_pid82558 00:32:34.168 Removing: /var/run/dpdk/spdk_pid82956 00:32:34.168 Removing: /var/run/dpdk/spdk_pid83479 00:32:34.168 Removing: /var/run/dpdk/spdk_pid83601 00:32:34.168 Removing: /var/run/dpdk/spdk_pid83659 00:32:34.168 Removing: /var/run/dpdk/spdk_pid83719 00:32:34.168 Removing: /var/run/dpdk/spdk_pid83780 00:32:34.168 Removing: /var/run/dpdk/spdk_pid83841 00:32:34.168 Removing: /var/run/dpdk/spdk_pid84038 00:32:34.168 Removing: /var/run/dpdk/spdk_pid84128 00:32:34.168 Removing: /var/run/dpdk/spdk_pid84190 00:32:34.168 Removing: /var/run/dpdk/spdk_pid84260 00:32:34.168 Removing: /var/run/dpdk/spdk_pid84296 00:32:34.168 Removing: /var/run/dpdk/spdk_pid84363 00:32:34.168 Removing: /var/run/dpdk/spdk_pid84490 00:32:34.168 Clean 00:32:34.168 04:17:16 -- common/autotest_common.sh@1453 -- # return 0 00:32:34.168 04:17:16 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:32:34.168 04:17:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:34.168 04:17:16 -- common/autotest_common.sh@10 -- # set +x 00:32:34.168 04:17:16 -- spdk/autotest.sh@391 -- # 
timing_exit autotest 00:32:34.168 04:17:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:34.168 04:17:16 -- common/autotest_common.sh@10 -- # set +x 00:32:34.427 04:17:16 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:32:34.427 04:17:16 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:32:34.427 04:17:16 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:32:34.427 04:17:16 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:32:34.427 04:17:16 -- spdk/autotest.sh@398 -- # hostname 00:32:34.427 04:17:16 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:32:34.686 geninfo: WARNING: invalid characters removed from testname! 00:33:01.246 04:17:42 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:03.149 04:17:45 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:05.056 04:17:47 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:07.594 04:17:49 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:09.531 04:17:51 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:11.433 04:17:54 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:33:13.968 04:17:56 -- spdk/autotest.sh@408 -- # 
rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:13.968 04:17:56 -- spdk/autorun.sh@1 -- $ timing_finish 00:33:13.968 04:17:56 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:33:13.968 04:17:56 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:13.968 04:17:56 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:33:13.968 04:17:56 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:13.968 + [[ -n 5242 ]] 00:33:13.968 + sudo kill 5242 00:33:13.978 [Pipeline] } 00:33:13.993 [Pipeline] // timeout 00:33:13.999 [Pipeline] } 00:33:14.014 [Pipeline] // stage 00:33:14.020 [Pipeline] } 00:33:14.035 [Pipeline] // catchError 00:33:14.045 [Pipeline] stage 00:33:14.048 [Pipeline] { (Stop VM) 00:33:14.084 [Pipeline] sh 00:33:14.365 + vagrant halt 00:33:16.902 ==> default: Halting domain... 00:33:23.492 [Pipeline] sh 00:33:23.774 + vagrant destroy -f 00:33:27.069 ==> default: Removing domain... 00:33:27.082 [Pipeline] sh 00:33:27.364 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:33:27.373 [Pipeline] } 00:33:27.388 [Pipeline] // stage 00:33:27.393 [Pipeline] } 00:33:27.407 [Pipeline] // dir 00:33:27.412 [Pipeline] } 00:33:27.425 [Pipeline] // wrap 00:33:27.431 [Pipeline] } 00:33:27.443 [Pipeline] // catchError 00:33:27.452 [Pipeline] stage 00:33:27.454 [Pipeline] { (Epilogue) 00:33:27.466 [Pipeline] sh 00:33:27.749 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:34.326 [Pipeline] catchError 00:33:34.328 [Pipeline] { 00:33:34.342 [Pipeline] sh 00:33:34.625 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:34.625 Artifacts sizes are good 00:33:34.634 [Pipeline] } 00:33:34.649 [Pipeline] // catchError 00:33:34.662 [Pipeline] archiveArtifacts 00:33:34.670 Archiving artifacts 00:33:34.792 [Pipeline] cleanWs 00:33:34.805 [WS-CLEANUP] Deleting project workspace... 00:33:34.805 [WS-CLEANUP] Deferred wipeout is used... 00:33:34.811 [WS-CLEANUP] done 00:33:34.813 [Pipeline] } 00:33:34.831 [Pipeline] // stage 00:33:34.837 [Pipeline] } 00:33:34.854 [Pipeline] // node 00:33:34.860 [Pipeline] End of Pipeline 00:33:34.897 Finished: SUCCESS